2026-04-04 00:00:09.632494 | Job console starting
2026-04-04 00:00:09.679647 | Updating git repos
2026-04-04 00:00:09.769183 | Cloning repos into workspace
2026-04-04 00:00:10.237350 | Restoring repo states
2026-04-04 00:00:10.260216 | Merging changes
2026-04-04 00:00:10.260237 | Checking out repos
2026-04-04 00:00:10.754471 | Preparing playbooks
2026-04-04 00:00:11.937879 | Running Ansible setup
2026-04-04 00:00:20.111478 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-04 00:00:21.893824 |
2026-04-04 00:00:21.893955 | PLAY [Base pre]
2026-04-04 00:00:21.916502 |
2026-04-04 00:00:21.916626 | TASK [Setup log path fact]
2026-04-04 00:00:21.936483 | orchestrator | ok
2026-04-04 00:00:21.959477 |
2026-04-04 00:00:21.959609 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-04 00:00:22.000260 | orchestrator | ok
2026-04-04 00:00:22.012181 |
2026-04-04 00:00:22.012299 | TASK [emit-job-header : Print job information]
2026-04-04 00:00:22.069404 | # Job Information
2026-04-04 00:00:22.069663 | Ansible Version: 2.16.14
2026-04-04 00:00:22.069708 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-04 00:00:22.069759 | Pipeline: periodic-midnight
2026-04-04 00:00:22.069788 | Executor: 521e9411259a
2026-04-04 00:00:22.069809 | Triggered by: https://github.com/osism/testbed
2026-04-04 00:00:22.069831 | Event ID: d88433eb18ea4d9ba93b69c4517821a5
2026-04-04 00:00:22.098180 |
2026-04-04 00:00:22.098298 | LOOP [emit-job-header : Print node information]
2026-04-04 00:00:22.373283 | orchestrator | ok:
2026-04-04 00:00:22.373487 | orchestrator | # Node Information
2026-04-04 00:00:22.373522 | orchestrator | Inventory Hostname: orchestrator
2026-04-04 00:00:22.373548 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-04 00:00:22.373570 | orchestrator | Username: zuul-testbed05
2026-04-04 00:00:22.373591 | orchestrator | Distro: Debian 12.13
2026-04-04 00:00:22.373614 | orchestrator | Provider: static-testbed
2026-04-04 00:00:22.373635 | orchestrator | Region:
2026-04-04 00:00:22.373656 | orchestrator | Label: testbed-orchestrator
2026-04-04 00:00:22.373676 | orchestrator | Product Name: OpenStack Nova
2026-04-04 00:00:22.373695 | orchestrator | Interface IP: 81.163.193.140
2026-04-04 00:00:22.397065 |
2026-04-04 00:00:22.397185 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-04 00:00:23.623801 | orchestrator -> localhost | changed
2026-04-04 00:00:23.630188 |
2026-04-04 00:00:23.630287 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-04 00:00:26.313567 | orchestrator -> localhost | changed
2026-04-04 00:00:26.327166 |
2026-04-04 00:00:26.327271 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-04 00:00:27.095143 | orchestrator -> localhost | ok
2026-04-04 00:00:27.101228 |
2026-04-04 00:00:27.101324 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-04 00:00:27.139545 | orchestrator | ok
2026-04-04 00:00:27.179292 | orchestrator | included: /var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-04 00:00:27.192575 |
2026-04-04 00:00:27.192672 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-04 00:00:32.019619 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-04 00:00:32.019817 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/work/65ae36fb71e247b4b6ac5f1c3db290c9_id_rsa
2026-04-04 00:00:32.019860 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/work/65ae36fb71e247b4b6ac5f1c3db290c9_id_rsa.pub
2026-04-04 00:00:32.019887 | orchestrator -> localhost | The key fingerprint is:
2026-04-04 00:00:32.019910 | orchestrator -> localhost | SHA256:f2kiFMZq9ub+wZONkfdjfQSkf6vsX0KAZZuwf2iS/Yw zuul-build-sshkey
2026-04-04 00:00:32.019928 | orchestrator -> localhost | The key's randomart image is:
2026-04-04 00:00:32.019955 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-04 00:00:32.019974 | orchestrator -> localhost | | . o . |
2026-04-04 00:00:32.020061 | orchestrator -> localhost | | . * = |
2026-04-04 00:00:32.020090 | orchestrator -> localhost | | + o = . |
2026-04-04 00:00:32.020113 | orchestrator -> localhost | | o . = + . |
2026-04-04 00:00:32.020130 | orchestrator -> localhost | | + S = * + o|
2026-04-04 00:00:32.020153 | orchestrator -> localhost | | o o o O O +.|
2026-04-04 00:00:32.020170 | orchestrator -> localhost | | + O E B.+|
2026-04-04 00:00:32.020187 | orchestrator -> localhost | | o . *...oo|
2026-04-04 00:00:32.020209 | orchestrator -> localhost | | .o.. .+.. |
2026-04-04 00:00:32.020228 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-04 00:00:32.020273 | orchestrator -> localhost | ok: Runtime: 0:00:03.531880
2026-04-04 00:00:32.026604 |
2026-04-04 00:00:32.026684 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-04 00:00:32.133507 | orchestrator | ok
2026-04-04 00:00:32.145673 | orchestrator | included: /var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-04 00:00:32.192144 |
2026-04-04 00:00:32.192965 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-04 00:00:32.230249 | orchestrator | skipping: Conditional result was False
2026-04-04 00:00:32.240367 |
2026-04-04 00:00:32.240652 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-04 00:00:33.497042 | orchestrator | changed
2026-04-04 00:00:33.506184 |
2026-04-04 00:00:33.506281 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-04 00:00:33.817603 | orchestrator | ok
2026-04-04 00:00:33.827984 |
2026-04-04 00:00:33.828121 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-04 00:00:34.354001 | orchestrator | ok
2026-04-04 00:00:34.380879 |
2026-04-04 00:00:34.380982 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-04 00:00:34.888066 | orchestrator | ok
2026-04-04 00:00:34.898757 |
2026-04-04 00:00:34.898872 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-04 00:00:34.936006 | orchestrator | skipping: Conditional result was False
2026-04-04 00:00:34.943918 |
2026-04-04 00:00:34.944021 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-04 00:00:36.055374 | orchestrator -> localhost | changed
2026-04-04 00:00:36.075808 |
2026-04-04 00:00:36.075905 | TASK [add-build-sshkey : Add back temp key]
2026-04-04 00:00:36.488364 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/work/65ae36fb71e247b4b6ac5f1c3db290c9_id_rsa (zuul-build-sshkey)
2026-04-04 00:00:36.488541 | orchestrator -> localhost | ok: Runtime: 0:00:00.008746
2026-04-04 00:00:36.494808 |
2026-04-04 00:00:36.494901 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-04 00:00:37.140268 | orchestrator | ok
2026-04-04 00:00:37.144936 |
2026-04-04 00:00:37.145018 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-04 00:00:37.168014 | orchestrator | skipping: Conditional result was False
2026-04-04 00:00:37.245727 |
2026-04-04 00:00:37.245828 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-04 00:00:37.923688 | orchestrator | ok
2026-04-04 00:00:37.944823 |
2026-04-04 00:00:37.944927 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-04 00:00:37.997397 | orchestrator | ok
2026-04-04 00:00:38.011295 |
2026-04-04 00:00:38.011396 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-04 00:00:38.894729 | orchestrator -> localhost | ok
2026-04-04 00:00:38.901758 |
2026-04-04 00:00:38.901850 | TASK [validate-host : Collect information about the host]
2026-04-04 00:00:40.577402 | orchestrator | ok
2026-04-04 00:00:40.616021 |
2026-04-04 00:00:40.616142 | TASK [validate-host : Sanitize hostname]
2026-04-04 00:00:40.790802 | orchestrator | ok
2026-04-04 00:00:40.795574 |
2026-04-04 00:00:40.798976 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-04 00:00:42.395989 | orchestrator -> localhost | changed
2026-04-04 00:00:42.400971 |
2026-04-04 00:00:42.401078 | TASK [validate-host : Collect information about zuul worker]
2026-04-04 00:00:43.180308 | orchestrator | ok
2026-04-04 00:00:43.184570 |
2026-04-04 00:00:43.184653 | TASK [validate-host : Write out all zuul information for each host]
2026-04-04 00:00:44.418714 | orchestrator -> localhost | changed
2026-04-04 00:00:44.427081 |
2026-04-04 00:00:44.427172 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-04 00:00:44.788930 | orchestrator | ok
2026-04-04 00:00:44.794259 |
2026-04-04 00:00:44.794341 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-04 00:02:14.932727 | orchestrator | changed:
2026-04-04 00:02:14.934378 | orchestrator | .d..t...... src/
2026-04-04 00:02:14.934456 | orchestrator | .d..t...... src/github.com/
2026-04-04 00:02:14.934490 | orchestrator | .d..t...... src/github.com/osism/
2026-04-04 00:02:14.934519 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-04 00:02:14.934545 | orchestrator | RedHat.yml
2026-04-04 00:02:14.951514 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-04 00:02:14.951531 | orchestrator | RedHat.yml
2026-04-04 00:02:14.951584 | orchestrator | = 1.53.0"...
2026-04-04 00:02:25.577718 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-04 00:02:25.719968 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-04 00:02:26.182209 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-04 00:02:26.245214 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-04 00:02:26.933919 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-04 00:02:26.995386 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-04 00:02:27.485263 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-04 00:02:27.485322 | orchestrator |
2026-04-04 00:02:27.485330 | orchestrator | Providers are signed by their developers.
2026-04-04 00:02:27.485335 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-04 00:02:27.485339 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-04 00:02:27.485353 | orchestrator |
2026-04-04 00:02:27.485358 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-04 00:02:27.485362 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-04 00:02:27.485375 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-04 00:02:27.485380 | orchestrator | you run "tofu init" in the future.
2026-04-04 00:02:27.485655 | orchestrator |
2026-04-04 00:02:27.485663 | orchestrator | OpenTofu has been successfully initialized!
2026-04-04 00:02:27.485670 | orchestrator |
2026-04-04 00:02:27.485674 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-04 00:02:27.485685 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-04 00:02:27.485704 | orchestrator | should now work.
2026-04-04 00:02:27.485708 | orchestrator |
2026-04-04 00:02:27.485713 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-04 00:02:27.485716 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-04 00:02:27.485721 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-04 00:02:27.736028 | orchestrator | Created and switched to workspace "ci"!
2026-04-04 00:02:27.736079 | orchestrator |
2026-04-04 00:02:27.736085 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-04 00:02:27.736091 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-04 00:02:27.736110 | orchestrator | for this configuration.
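For orientation: the messages above are the standard output of `tofu init` followed by `tofu workspace new ci` (and the subsequent plan output comes from `tofu plan`). A provider requirements block consistent with the providers the log shows being installed might look like the following sketch. This is an assumption, not the testbed's actual source; the only constraint visible in the log is `hashicorp/local >= 2.2.0`, and the provider behind the truncated `>= 1.53.0` constraint is not recoverable from this log.

```hcl
# Sketch only: providers inferred from the "Installing ..." lines above.
# Version constraints other than local's ">= 2.2.0" are assumptions.
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

`tofu init` records the resolved versions (here null v3.2.4, openstack v3.4.0, local v2.8.0) in `.terraform.lock.hcl`, which is why the log recommends committing that file.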
2026-04-04 00:02:27.842055 | orchestrator | ci.auto.tfvars
2026-04-04 00:02:27.845786 | orchestrator | default_custom.tf
2026-04-04 00:02:28.922559 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-04 00:02:29.517425 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-04 00:02:29.787772 | orchestrator |
2026-04-04 00:02:29.787877 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-04 00:02:29.787886 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-04 00:02:29.794410 | orchestrator |   + create
2026-04-04 00:02:29.794453 | orchestrator |  <= read (data resources)
2026-04-04 00:02:29.794467 | orchestrator |
2026-04-04 00:02:29.794472 | orchestrator | OpenTofu will perform the following actions:
2026-04-04 00:02:29.794599 | orchestrator |
2026-04-04 00:02:29.794612 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-04 00:02:29.794618 | orchestrator |   # (config refers to values not yet known)
2026-04-04 00:02:29.794622 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-04 00:02:29.794627 | orchestrator |       + checksum = (known after apply)
2026-04-04 00:02:29.794632 | orchestrator |       + created_at = (known after apply)
2026-04-04 00:02:29.794636 | orchestrator |       + file = (known after apply)
2026-04-04 00:02:29.794640 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.794664 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.794668 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-04 00:02:29.794672 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-04 00:02:29.794676 | orchestrator |       + most_recent = true
2026-04-04 00:02:29.794681 | orchestrator |       + name = (known after apply)
2026-04-04 00:02:29.794684 | orchestrator |       + protected = (known after apply)
2026-04-04 00:02:29.794688 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.794695 | orchestrator |       + schema = (known after apply)
2026-04-04 00:02:29.794699 | orchestrator |       + size_bytes = (known after apply)
2026-04-04 00:02:29.794703 | orchestrator |       + tags = (known after apply)
2026-04-04 00:02:29.794706 | orchestrator |       + updated_at = (known after apply)
2026-04-04 00:02:29.794710 | orchestrator |     }
2026-04-04 00:02:29.794860 | orchestrator |
2026-04-04 00:02:29.794875 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-04 00:02:29.794879 | orchestrator |   # (config refers to values not yet known)
2026-04-04 00:02:29.794884 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-04 00:02:29.794888 | orchestrator |       + checksum = (known after apply)
2026-04-04 00:02:29.794891 | orchestrator |       + created_at = (known after apply)
2026-04-04 00:02:29.794895 | orchestrator |       + file = (known after apply)
2026-04-04 00:02:29.794899 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.794903 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.794907 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-04 00:02:29.794911 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-04 00:02:29.794914 | orchestrator |       + most_recent = true
2026-04-04 00:02:29.794918 | orchestrator |       + name = (known after apply)
2026-04-04 00:02:29.794922 | orchestrator |       + protected = (known after apply)
2026-04-04 00:02:29.794926 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.794930 | orchestrator |       + schema = (known after apply)
2026-04-04 00:02:29.794933 | orchestrator |       + size_bytes = (known after apply)
2026-04-04 00:02:29.794937 | orchestrator |       + tags = (known after apply)
2026-04-04 00:02:29.794941 | orchestrator |       + updated_at = (known after apply)
2026-04-04 00:02:29.794945 | orchestrator |     }
2026-04-04 00:02:29.795089 | orchestrator |
2026-04-04 00:02:29.795105 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-04 00:02:29.795110 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-04 00:02:29.795114 | orchestrator |       + content = (known after apply)
2026-04-04 00:02:29.795118 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-04 00:02:29.795122 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-04 00:02:29.795126 | orchestrator |       + content_md5 = (known after apply)
2026-04-04 00:02:29.795130 | orchestrator |       + content_sha1 = (known after apply)
2026-04-04 00:02:29.795134 | orchestrator |       + content_sha256 = (known after apply)
2026-04-04 00:02:29.795137 | orchestrator |       + content_sha512 = (known after apply)
2026-04-04 00:02:29.795141 | orchestrator |       + directory_permission = "0777"
2026-04-04 00:02:29.795145 | orchestrator |       + file_permission = "0644"
2026-04-04 00:02:29.795149 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-04 00:02:29.795153 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.795157 | orchestrator |     }
2026-04-04 00:02:29.795225 | orchestrator |
2026-04-04 00:02:29.795237 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-04 00:02:29.795241 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-04 00:02:29.795245 | orchestrator |       + content = (known after apply)
2026-04-04 00:02:29.795249 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-04 00:02:29.795252 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-04 00:02:29.795256 | orchestrator |       + content_md5 = (known after apply)
2026-04-04 00:02:29.795260 | orchestrator |       + content_sha1 = (known after apply)
2026-04-04 00:02:29.795264 | orchestrator |       + content_sha256 = (known after apply)
2026-04-04 00:02:29.795268 | orchestrator |       + content_sha512 = (known after apply)
2026-04-04 00:02:29.795272 | orchestrator |       + directory_permission = "0777"
2026-04-04 00:02:29.795275 | orchestrator |       + file_permission = "0644"
2026-04-04 00:02:29.795287 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-04 00:02:29.795291 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.795295 | orchestrator |     }
2026-04-04 00:02:29.795372 | orchestrator |
2026-04-04 00:02:29.795388 | orchestrator |   # local_file.inventory will be created
2026-04-04 00:02:29.795392 | orchestrator |   + resource "local_file" "inventory" {
2026-04-04 00:02:29.795396 | orchestrator |       + content = (known after apply)
2026-04-04 00:02:29.795400 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-04 00:02:29.795404 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-04 00:02:29.795408 | orchestrator |       + content_md5 = (known after apply)
2026-04-04 00:02:29.795412 | orchestrator |       + content_sha1 = (known after apply)
2026-04-04 00:02:29.795416 | orchestrator |       + content_sha256 = (known after apply)
2026-04-04 00:02:29.795420 | orchestrator |       + content_sha512 = (known after apply)
2026-04-04 00:02:29.795424 | orchestrator |       + directory_permission = "0777"
2026-04-04 00:02:29.795427 | orchestrator |       + file_permission = "0644"
2026-04-04 00:02:29.795431 | orchestrator |       + filename = "inventory.ci"
2026-04-04 00:02:29.795435 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.795439 | orchestrator |     }
2026-04-04 00:02:29.795511 | orchestrator |
2026-04-04 00:02:29.795523 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-04 00:02:29.795585 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-04 00:02:29.795589 | orchestrator |       + content = (sensitive value)
2026-04-04 00:02:29.795593 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-04 00:02:29.795597 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-04 00:02:29.795612 | orchestrator |       + content_md5 = (known after apply)
2026-04-04 00:02:29.795616 | orchestrator |       + content_sha1 = (known after apply)
2026-04-04 00:02:29.795620 | orchestrator |       + content_sha256 = (known after apply)
2026-04-04 00:02:29.795624 | orchestrator |       + content_sha512 = (known after apply)
2026-04-04 00:02:29.795628 | orchestrator |       + directory_permission = "0700"
2026-04-04 00:02:29.795632 | orchestrator |       + file_permission = "0600"
2026-04-04 00:02:29.795635 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-04 00:02:29.795639 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.795643 | orchestrator |     }
2026-04-04 00:02:29.795666 | orchestrator |
2026-04-04 00:02:29.795678 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-04 00:02:29.795682 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-04 00:02:29.795686 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.795690 | orchestrator |     }
2026-04-04 00:02:29.795759 | orchestrator |
2026-04-04 00:02:29.795771 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-04 00:02:29.795776 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-04 00:02:29.795780 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.795783 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.795787 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.795791 | orchestrator |       + image_id = (known after apply)
2026-04-04 00:02:29.795795 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.795820 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-04 00:02:29.795827 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.795833 | orchestrator |       + size = 80
2026-04-04 00:02:29.795840 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.795846 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.795852 | orchestrator |     }
2026-04-04 00:02:29.795924 | orchestrator |
2026-04-04 00:02:29.795936 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-04 00:02:29.795940 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:29.795944 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.795948 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.795952 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.795964 | orchestrator |       + image_id = (known after apply)
2026-04-04 00:02:29.795968 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.795972 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-04 00:02:29.795976 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.795980 | orchestrator |       + size = 80
2026-04-04 00:02:29.795984 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.795988 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.795991 | orchestrator |     }
2026-04-04 00:02:29.796053 | orchestrator |
2026-04-04 00:02:29.796064 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-04 00:02:29.796068 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:29.796072 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.796076 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.796080 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.796084 | orchestrator |       + image_id = (known after apply)
2026-04-04 00:02:29.796087 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.796091 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-04 00:02:29.796095 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.796110 | orchestrator |       + size = 80
2026-04-04 00:02:29.796114 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.796118 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.796122 | orchestrator |     }
2026-04-04 00:02:29.796181 | orchestrator |
2026-04-04 00:02:29.796191 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-04 00:02:29.796196 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:29.796199 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.796203 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.796207 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.796211 | orchestrator |       + image_id = (known after apply)
2026-04-04 00:02:29.796214 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.796218 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-04 00:02:29.796222 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.796226 | orchestrator |       + size = 80
2026-04-04 00:02:29.796229 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.796233 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.796237 | orchestrator |     }
2026-04-04 00:02:29.796296 | orchestrator |
2026-04-04 00:02:29.796306 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-04 00:02:29.796310 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:29.796314 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.796318 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.796322 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.796325 | orchestrator |       + image_id = (known after apply)
2026-04-04 00:02:29.796329 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.796337 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-04 00:02:29.796341 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.796345 | orchestrator |       + size = 80
2026-04-04 00:02:29.796349 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.796352 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.796356 | orchestrator |     }
2026-04-04 00:02:29.796412 | orchestrator |
2026-04-04 00:02:29.796422 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-04 00:02:29.796427 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:29.796431 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.796435 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.796438 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.796447 | orchestrator |       + image_id = (known after apply)
2026-04-04 00:02:29.796450 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.796454 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-04 00:02:29.796458 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.796462 | orchestrator |       + size = 80
2026-04-04 00:02:29.796465 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.796469 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.796473 | orchestrator |     }
2026-04-04 00:02:29.796534 | orchestrator |
2026-04-04 00:02:29.796544 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-04 00:02:29.796549 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:29.796553 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.796556 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.796560 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.796564 | orchestrator |       + image_id = (known after apply)
2026-04-04 00:02:29.796567 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.796571 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-04 00:02:29.796575 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.796579 | orchestrator |       + size = 80
2026-04-04 00:02:29.796583 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.796586 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.796590 | orchestrator |     }
2026-04-04 00:02:29.796793 | orchestrator |
2026-04-04 00:02:29.796849 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-04 00:02:29.796855 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.796859 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.796863 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.796867 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.796871 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.796875 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-04 00:02:29.796879 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.796883 | orchestrator |       + size = 20
2026-04-04 00:02:29.796887 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.796891 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.796895 | orchestrator |     }
2026-04-04 00:02:29.796961 | orchestrator |
2026-04-04 00:02:29.796972 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-04 00:02:29.796977 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.796981 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.796985 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.796988 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.796992 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.796996 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-04 00:02:29.797000 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.797004 | orchestrator |       + size = 20
2026-04-04 00:02:29.797008 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.797012 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.797015 | orchestrator |     }
2026-04-04 00:02:29.797077 | orchestrator |
2026-04-04 00:02:29.797087 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-04 00:02:29.797092 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.797096 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.797099 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.797103 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.797107 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.797111 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-04 00:02:29.797115 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.797124 | orchestrator |       + size = 20
2026-04-04 00:02:29.797128 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.797132 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.797136 | orchestrator |     }
2026-04-04 00:02:29.797245 | orchestrator |
2026-04-04 00:02:29.797313 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-04 00:02:29.797400 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.797497 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.797502 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.797584 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.797641 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.797742 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-04 00:02:29.797747 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.797751 | orchestrator |       + size = 20
2026-04-04 00:02:29.797755 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.797842 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.797848 | orchestrator |     }
2026-04-04 00:02:29.798576 | orchestrator |
2026-04-04 00:02:29.798634 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-04 00:02:29.798639 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.798643 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.798646 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.798650 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.798654 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.798658 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-04 00:02:29.798662 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.798672 | orchestrator |       + size = 20
2026-04-04 00:02:29.798676 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.798689 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.798693 | orchestrator |     }
2026-04-04 00:02:29.798763 | orchestrator |
2026-04-04 00:02:29.798775 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-04 00:02:29.798779 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.798783 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.798787 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.798791 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.798795 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.798830 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-04 00:02:29.798835 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.798838 | orchestrator |       + size = 20
2026-04-04 00:02:29.798842 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.798846 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.798850 | orchestrator |     }
2026-04-04 00:02:29.798916 | orchestrator |
2026-04-04 00:02:29.798927 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-04 00:02:29.798931 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.798935 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.798939 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.798943 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.798947 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.798951 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-04 00:02:29.798954 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.798958 | orchestrator |       + size = 20
2026-04-04 00:02:29.798962 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.798966 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.798969 | orchestrator |     }
2026-04-04 00:02:29.799033 | orchestrator |
2026-04-04 00:02:29.799045 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-04 00:02:29.799049 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:29.799061 | orchestrator |       + attachment = (known after apply)
2026-04-04 00:02:29.799065 | orchestrator |       + availability_zone = "nova"
2026-04-04 00:02:29.799069 | orchestrator |       + id = (known after apply)
2026-04-04 00:02:29.799072 | orchestrator |       + metadata = (known after apply)
2026-04-04 00:02:29.799076 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-04 00:02:29.799080 | orchestrator |       + region = (known after apply)
2026-04-04 00:02:29.799084 | orchestrator |       + size = 20
2026-04-04 00:02:29.799088 | orchestrator |       + volume_retype_policy = "never"
2026-04-04 00:02:29.799092 | orchestrator |       + volume_type = "ssd"
2026-04-04 00:02:29.799095 | orchestrator |     }
2026-04-04 00:02:29.799158 | orchestrator |
2026-04-04 00:02:29.799170 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-04 00:02:29.799174 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-04 00:02:29.799178 | orchestrator | + attachment = (known after apply) 2026-04-04 00:02:29.799182 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.799185 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.799189 | orchestrator | + metadata = (known after apply) 2026-04-04 00:02:29.799193 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-04 00:02:29.799197 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.799201 | orchestrator | + size = 20 2026-04-04 00:02:29.799204 | orchestrator | + volume_retype_policy = "never" 2026-04-04 00:02:29.799208 | orchestrator | + volume_type = "ssd" 2026-04-04 00:02:29.799212 | orchestrator | } 2026-04-04 00:02:29.799419 | orchestrator | 2026-04-04 00:02:29.799437 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-04 00:02:29.799441 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-04 00:02:29.799445 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:29.799449 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:29.799453 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:29.799456 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:29.799460 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.799464 | orchestrator | + config_drive = true 2026-04-04 00:02:29.799468 | orchestrator | + created = (known after apply) 2026-04-04 00:02:29.799471 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:29.799475 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-04 00:02:29.799479 | orchestrator | + force_delete = false 2026-04-04 00:02:29.799483 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:29.799486 | 
orchestrator | + id = (known after apply) 2026-04-04 00:02:29.799490 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:29.799494 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:29.799498 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:29.799502 | orchestrator | + name = "testbed-manager" 2026-04-04 00:02:29.799505 | orchestrator | + power_state = "active" 2026-04-04 00:02:29.799509 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.799513 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:29.799517 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:29.799520 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:29.799524 | orchestrator | + user_data = (sensitive value) 2026-04-04 00:02:29.799528 | orchestrator | 2026-04-04 00:02:29.799532 | orchestrator | + block_device { 2026-04-04 00:02:29.799536 | orchestrator | + boot_index = 0 2026-04-04 00:02:29.799540 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:29.799551 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:29.799555 | orchestrator | + multiattach = false 2026-04-04 00:02:29.799558 | orchestrator | + source_type = "volume" 2026-04-04 00:02:29.799562 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.799570 | orchestrator | } 2026-04-04 00:02:29.799574 | orchestrator | 2026-04-04 00:02:29.799578 | orchestrator | + network { 2026-04-04 00:02:29.799582 | orchestrator | + access_network = false 2026-04-04 00:02:29.799586 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:29.799589 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:29.799593 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:29.799597 | orchestrator | + name = (known after apply) 2026-04-04 00:02:29.799601 | orchestrator | + port = (known after apply) 2026-04-04 00:02:29.799604 | orchestrator | + uuid = (known after apply) 2026-04-04 
00:02:29.799608 | orchestrator | } 2026-04-04 00:02:29.799612 | orchestrator | } 2026-04-04 00:02:29.799811 | orchestrator | 2026-04-04 00:02:29.799824 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-04 00:02:29.799829 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:29.799833 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:29.799836 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:29.799840 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:29.799844 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:29.799848 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.799852 | orchestrator | + config_drive = true 2026-04-04 00:02:29.799855 | orchestrator | + created = (known after apply) 2026-04-04 00:02:29.799859 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:29.799863 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:29.799867 | orchestrator | + force_delete = false 2026-04-04 00:02:29.799870 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:29.799874 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.799878 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:29.799882 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:29.799886 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:29.799889 | orchestrator | + name = "testbed-node-0" 2026-04-04 00:02:29.799893 | orchestrator | + power_state = "active" 2026-04-04 00:02:29.799897 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.799901 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:29.799904 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:29.799908 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:29.799912 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:29.799916 | orchestrator | 2026-04-04 00:02:29.799923 | orchestrator | + block_device { 2026-04-04 00:02:29.799927 | orchestrator | + boot_index = 0 2026-04-04 00:02:29.799931 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:29.799934 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:29.799938 | orchestrator | + multiattach = false 2026-04-04 00:02:29.799942 | orchestrator | + source_type = "volume" 2026-04-04 00:02:29.799946 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.799949 | orchestrator | } 2026-04-04 00:02:29.799953 | orchestrator | 2026-04-04 00:02:29.799957 | orchestrator | + network { 2026-04-04 00:02:29.799961 | orchestrator | + access_network = false 2026-04-04 00:02:29.799965 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:29.799968 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:29.799972 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:29.799976 | orchestrator | + name = (known after apply) 2026-04-04 00:02:29.799980 | orchestrator | + port = (known after apply) 2026-04-04 00:02:29.799984 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.799987 | orchestrator | } 2026-04-04 00:02:29.799991 | orchestrator | } 2026-04-04 00:02:29.800242 | orchestrator | 2026-04-04 00:02:29.800259 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-04 00:02:29.800263 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:29.800270 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:29.800279 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:29.800283 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:29.800286 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:29.800290 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.800294 
| orchestrator | + config_drive = true 2026-04-04 00:02:29.800298 | orchestrator | + created = (known after apply) 2026-04-04 00:02:29.800301 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:29.800305 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:29.800309 | orchestrator | + force_delete = false 2026-04-04 00:02:29.800313 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:29.800317 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.800320 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:29.800324 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:29.800328 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:29.800332 | orchestrator | + name = "testbed-node-1" 2026-04-04 00:02:29.800335 | orchestrator | + power_state = "active" 2026-04-04 00:02:29.800339 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.800343 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:29.800347 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:29.800350 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:29.800354 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:29.800358 | orchestrator | 2026-04-04 00:02:29.800362 | orchestrator | + block_device { 2026-04-04 00:02:29.800366 | orchestrator | + boot_index = 0 2026-04-04 00:02:29.800370 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:29.800374 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:29.800377 | orchestrator | + multiattach = false 2026-04-04 00:02:29.800381 | orchestrator | + source_type = "volume" 2026-04-04 00:02:29.800385 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.800389 | orchestrator | } 2026-04-04 00:02:29.800393 | orchestrator | 2026-04-04 00:02:29.800396 | orchestrator | + network { 2026-04-04 00:02:29.800400 | orchestrator | + access_network = 
false 2026-04-04 00:02:29.800404 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:29.800408 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:29.800411 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:29.800415 | orchestrator | + name = (known after apply) 2026-04-04 00:02:29.800419 | orchestrator | + port = (known after apply) 2026-04-04 00:02:29.800423 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.800455 | orchestrator | } 2026-04-04 00:02:29.800459 | orchestrator | } 2026-04-04 00:02:29.800868 | orchestrator | 2026-04-04 00:02:29.800949 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-04 00:02:29.800955 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:29.800959 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:29.800977 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:29.800982 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:29.800987 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:29.800996 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.801000 | orchestrator | + config_drive = true 2026-04-04 00:02:29.801004 | orchestrator | + created = (known after apply) 2026-04-04 00:02:29.801008 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:29.801012 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:29.801016 | orchestrator | + force_delete = false 2026-04-04 00:02:29.801020 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:29.801023 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.801027 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:29.801036 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:29.801040 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:29.801044 | orchestrator | + name = 
"testbed-node-2" 2026-04-04 00:02:29.801048 | orchestrator | + power_state = "active" 2026-04-04 00:02:29.801081 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.801086 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:29.801089 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:29.801093 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:29.801097 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:29.801101 | orchestrator | 2026-04-04 00:02:29.801105 | orchestrator | + block_device { 2026-04-04 00:02:29.801109 | orchestrator | + boot_index = 0 2026-04-04 00:02:29.801113 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:29.801117 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:29.801120 | orchestrator | + multiattach = false 2026-04-04 00:02:29.801124 | orchestrator | + source_type = "volume" 2026-04-04 00:02:29.801128 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.801132 | orchestrator | } 2026-04-04 00:02:29.801136 | orchestrator | 2026-04-04 00:02:29.801140 | orchestrator | + network { 2026-04-04 00:02:29.801144 | orchestrator | + access_network = false 2026-04-04 00:02:29.801147 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:29.801151 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:29.801155 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:29.801159 | orchestrator | + name = (known after apply) 2026-04-04 00:02:29.801163 | orchestrator | + port = (known after apply) 2026-04-04 00:02:29.801166 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.801170 | orchestrator | } 2026-04-04 00:02:29.801174 | orchestrator | } 2026-04-04 00:02:29.801378 | orchestrator | 2026-04-04 00:02:29.801391 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-04 00:02:29.801395 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:29.801399 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:29.801403 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:29.801407 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:29.801411 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:29.801414 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.801418 | orchestrator | + config_drive = true 2026-04-04 00:02:29.801422 | orchestrator | + created = (known after apply) 2026-04-04 00:02:29.801426 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:29.801429 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:29.801433 | orchestrator | + force_delete = false 2026-04-04 00:02:29.801437 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:29.801441 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.801448 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:29.801452 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:29.801455 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:29.801459 | orchestrator | + name = "testbed-node-3" 2026-04-04 00:02:29.801463 | orchestrator | + power_state = "active" 2026-04-04 00:02:29.801467 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.801470 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:29.801474 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:29.801478 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:29.801482 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:29.801486 | orchestrator | 2026-04-04 00:02:29.801489 | orchestrator | + block_device { 2026-04-04 00:02:29.801497 | orchestrator | + boot_index = 0 2026-04-04 00:02:29.801501 | orchestrator | + delete_on_termination = false 2026-04-04 
00:02:29.801504 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:29.801513 | orchestrator | + multiattach = false 2026-04-04 00:02:29.801516 | orchestrator | + source_type = "volume" 2026-04-04 00:02:29.801520 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.801524 | orchestrator | } 2026-04-04 00:02:29.801528 | orchestrator | 2026-04-04 00:02:29.801531 | orchestrator | + network { 2026-04-04 00:02:29.801535 | orchestrator | + access_network = false 2026-04-04 00:02:29.801539 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:29.801543 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:29.801547 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:29.801551 | orchestrator | + name = (known after apply) 2026-04-04 00:02:29.801554 | orchestrator | + port = (known after apply) 2026-04-04 00:02:29.801558 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.801562 | orchestrator | } 2026-04-04 00:02:29.801566 | orchestrator | } 2026-04-04 00:02:29.801765 | orchestrator | 2026-04-04 00:02:29.801778 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-04 00:02:29.801782 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:29.801789 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:29.801793 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:29.801816 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:29.801820 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:29.801824 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.801828 | orchestrator | + config_drive = true 2026-04-04 00:02:29.801831 | orchestrator | + created = (known after apply) 2026-04-04 00:02:29.801835 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:29.801839 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:29.801843 | 
orchestrator | + force_delete = false 2026-04-04 00:02:29.801846 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:29.801850 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.801854 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:29.801858 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:29.801861 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:29.801865 | orchestrator | + name = "testbed-node-4" 2026-04-04 00:02:29.801869 | orchestrator | + power_state = "active" 2026-04-04 00:02:29.801872 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.801876 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:29.801880 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:29.801884 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:29.801888 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:29.801891 | orchestrator | 2026-04-04 00:02:29.801895 | orchestrator | + block_device { 2026-04-04 00:02:29.801899 | orchestrator | + boot_index = 0 2026-04-04 00:02:29.801903 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:29.801906 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:29.801910 | orchestrator | + multiattach = false 2026-04-04 00:02:29.801914 | orchestrator | + source_type = "volume" 2026-04-04 00:02:29.801917 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.801921 | orchestrator | } 2026-04-04 00:02:29.801925 | orchestrator | 2026-04-04 00:02:29.801929 | orchestrator | + network { 2026-04-04 00:02:29.801933 | orchestrator | + access_network = false 2026-04-04 00:02:29.801936 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:29.801940 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:29.801944 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:29.801947 | orchestrator | + name = (known 
after apply) 2026-04-04 00:02:29.801951 | orchestrator | + port = (known after apply) 2026-04-04 00:02:29.801955 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.801959 | orchestrator | } 2026-04-04 00:02:29.801962 | orchestrator | } 2026-04-04 00:02:29.802187 | orchestrator | 2026-04-04 00:02:29.802202 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-04 00:02:29.802207 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:29.802211 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:29.802214 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:29.802218 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:29.802222 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:29.802226 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:29.802230 | orchestrator | + config_drive = true 2026-04-04 00:02:29.802234 | orchestrator | + created = (known after apply) 2026-04-04 00:02:29.802237 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:29.802241 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:29.802245 | orchestrator | + force_delete = false 2026-04-04 00:02:29.802252 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:29.802271 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.802275 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:29.802279 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:29.802291 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:29.802294 | orchestrator | + name = "testbed-node-5" 2026-04-04 00:02:29.802298 | orchestrator | + power_state = "active" 2026-04-04 00:02:29.802302 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.802306 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:29.802317 | orchestrator | + 
stop_before_destroy = false 2026-04-04 00:02:29.802321 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:29.802325 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:29.802341 | orchestrator | 2026-04-04 00:02:29.802345 | orchestrator | + block_device { 2026-04-04 00:02:29.802383 | orchestrator | + boot_index = 0 2026-04-04 00:02:29.802417 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:29.802433 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:29.802437 | orchestrator | + multiattach = false 2026-04-04 00:02:29.802441 | orchestrator | + source_type = "volume" 2026-04-04 00:02:29.802453 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.802457 | orchestrator | } 2026-04-04 00:02:29.802461 | orchestrator | 2026-04-04 00:02:29.802464 | orchestrator | + network { 2026-04-04 00:02:29.802468 | orchestrator | + access_network = false 2026-04-04 00:02:29.802472 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:29.802476 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:29.802480 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:29.802484 | orchestrator | + name = (known after apply) 2026-04-04 00:02:29.802488 | orchestrator | + port = (known after apply) 2026-04-04 00:02:29.802491 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:29.802495 | orchestrator | } 2026-04-04 00:02:29.802499 | orchestrator | } 2026-04-04 00:02:29.802647 | orchestrator | 2026-04-04 00:02:29.802660 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-04 00:02:29.802664 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-04 00:02:29.802677 | orchestrator | + fingerprint = (known after apply) 2026-04-04 00:02:29.802688 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.802692 | orchestrator | + name = "testbed" 2026-04-04 00:02:29.802696 | orchestrator | + private_key = 
(sensitive value) 2026-04-04 00:02:29.802700 | orchestrator | + public_key = (known after apply) 2026-04-04 00:02:29.802737 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.802781 | orchestrator | + user_id = (known after apply) 2026-04-04 00:02:29.802842 | orchestrator | } 2026-04-04 00:02:29.802934 | orchestrator | 2026-04-04 00:02:29.802955 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-04 00:02:29.802973 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-04 00:02:29.802983 | orchestrator | + device = (known after apply) 2026-04-04 00:02:29.802995 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.803028 | orchestrator | + instance_id = (known after apply) 2026-04-04 00:02:29.803033 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.803037 | orchestrator | + volume_id = (known after apply) 2026-04-04 00:02:29.803049 | orchestrator | } 2026-04-04 00:02:29.803099 | orchestrator | 2026-04-04 00:02:29.803118 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-04 00:02:29.803131 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-04 00:02:29.803135 | orchestrator | + device = (known after apply) 2026-04-04 00:02:29.803138 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.803142 | orchestrator | + instance_id = (known after apply) 2026-04-04 00:02:29.803146 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.803150 | orchestrator | + volume_id = (known after apply) 2026-04-04 00:02:29.803154 | orchestrator | } 2026-04-04 00:02:29.803200 | orchestrator | 2026-04-04 00:02:29.803212 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-04 00:02:29.803216 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-04-04 00:02:29.803220 | orchestrator | + device = (known after apply)
2026-04-04 00:02:29.803224 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803227 | orchestrator | + instance_id = (known after apply)
2026-04-04 00:02:29.803231 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803235 | orchestrator | + volume_id = (known after apply)
2026-04-04 00:02:29.803239 | orchestrator | }
2026-04-04 00:02:29.803289 | orchestrator |
2026-04-04 00:02:29.803301 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-04-04 00:02:29.803306 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-04 00:02:29.803309 | orchestrator | + device = (known after apply)
2026-04-04 00:02:29.803313 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803317 | orchestrator | + instance_id = (known after apply)
2026-04-04 00:02:29.803321 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803324 | orchestrator | + volume_id = (known after apply)
2026-04-04 00:02:29.803328 | orchestrator | }
2026-04-04 00:02:29.803369 | orchestrator |
2026-04-04 00:02:29.803383 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-04-04 00:02:29.803387 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-04 00:02:29.803391 | orchestrator | + device = (known after apply)
2026-04-04 00:02:29.803398 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803402 | orchestrator | + instance_id = (known after apply)
2026-04-04 00:02:29.803409 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803413 | orchestrator | + volume_id = (known after apply)
2026-04-04 00:02:29.803417 | orchestrator | }
2026-04-04 00:02:29.803490 | orchestrator |
2026-04-04 00:02:29.803507 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-04-04 00:02:29.803511 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-04 00:02:29.803515 | orchestrator | + device = (known after apply)
2026-04-04 00:02:29.803519 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803522 | orchestrator | + instance_id = (known after apply)
2026-04-04 00:02:29.803526 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803530 | orchestrator | + volume_id = (known after apply)
2026-04-04 00:02:29.803534 | orchestrator | }
2026-04-04 00:02:29.803605 | orchestrator |
2026-04-04 00:02:29.803617 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-04-04 00:02:29.803621 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-04 00:02:29.803625 | orchestrator | + device = (known after apply)
2026-04-04 00:02:29.803629 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803633 | orchestrator | + instance_id = (known after apply)
2026-04-04 00:02:29.803637 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803646 | orchestrator | + volume_id = (known after apply)
2026-04-04 00:02:29.803650 | orchestrator | }
2026-04-04 00:02:29.803709 | orchestrator |
2026-04-04 00:02:29.803725 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-04-04 00:02:29.803729 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-04 00:02:29.803733 | orchestrator | + device = (known after apply)
2026-04-04 00:02:29.803737 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803741 | orchestrator | + instance_id = (known after apply)
2026-04-04 00:02:29.803745 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803749 | orchestrator | + volume_id = (known after apply)
2026-04-04 00:02:29.803752 | orchestrator | }
2026-04-04 00:02:29.803820 | orchestrator |
2026-04-04 00:02:29.803835 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-04-04 00:02:29.803840 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-04 00:02:29.803844 | orchestrator | + device = (known after apply)
2026-04-04 00:02:29.803848 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803852 | orchestrator | + instance_id = (known after apply)
2026-04-04 00:02:29.803856 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803860 | orchestrator | + volume_id = (known after apply)
2026-04-04 00:02:29.803863 | orchestrator | }
2026-04-04 00:02:29.803904 | orchestrator |
2026-04-04 00:02:29.803914 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-04-04 00:02:29.803920 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-04-04 00:02:29.803923 | orchestrator | + fixed_ip = (known after apply)
2026-04-04 00:02:29.803927 | orchestrator | + floating_ip = (known after apply)
2026-04-04 00:02:29.803931 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.803935 | orchestrator | + port_id = (known after apply)
2026-04-04 00:02:29.803938 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.803942 | orchestrator | }
2026-04-04 00:02:29.804013 | orchestrator |
2026-04-04 00:02:29.804024 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-04-04 00:02:29.804028 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-04-04 00:02:29.804032 | orchestrator | + address = (known after apply)
2026-04-04 00:02:29.804036 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.804040 | orchestrator | + dns_domain = (known after apply)
2026-04-04 00:02:29.804044 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.804048 | orchestrator | + fixed_ip = (known after apply)
2026-04-04 00:02:29.804051 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.804055 | orchestrator | + pool = "public"
2026-04-04 00:02:29.804059 | orchestrator | + port_id = (known after apply)
2026-04-04 00:02:29.804063 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.804067 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.804071 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.804075 | orchestrator | }
2026-04-04 00:02:29.804165 | orchestrator |
2026-04-04 00:02:29.804177 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-04-04 00:02:29.804181 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-04-04 00:02:29.804185 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.804189 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.804193 | orchestrator | + availability_zone_hints = [
2026-04-04 00:02:29.804197 | orchestrator | + "nova",
2026-04-04 00:02:29.804201 | orchestrator | ]
2026-04-04 00:02:29.804205 | orchestrator | + dns_domain = (known after apply)
2026-04-04 00:02:29.804209 | orchestrator | + external = (known after apply)
2026-04-04 00:02:29.804213 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.804216 | orchestrator | + mtu = (known after apply)
2026-04-04 00:02:29.804220 | orchestrator | + name = "net-testbed-management"
2026-04-04 00:02:29.804224 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.804232 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.804236 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.804240 | orchestrator | + shared = (known after apply)
2026-04-04 00:02:29.804244 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.804248 | orchestrator | + transparent_vlan = (known after apply)
2026-04-04 00:02:29.804252 | orchestrator |
2026-04-04 00:02:29.804256 | orchestrator | + segments (known after apply)
2026-04-04 00:02:29.804259 | orchestrator | }
2026-04-04 00:02:29.804415 | orchestrator |
2026-04-04 00:02:29.804427 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-04-04 00:02:29.804431 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-04-04 00:02:29.804435 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.804439 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-04 00:02:29.804443 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-04 00:02:29.804450 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.804454 | orchestrator | + device_id = (known after apply)
2026-04-04 00:02:29.804458 | orchestrator | + device_owner = (known after apply)
2026-04-04 00:02:29.804462 | orchestrator | + dns_assignment = (known after apply)
2026-04-04 00:02:29.804466 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.804469 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.804473 | orchestrator | + mac_address = (known after apply)
2026-04-04 00:02:29.804477 | orchestrator | + network_id = (known after apply)
2026-04-04 00:02:29.804481 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.804485 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.804488 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.804492 | orchestrator | + security_group_ids = (known after apply)
2026-04-04 00:02:29.804496 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.804500 | orchestrator |
2026-04-04 00:02:29.804503 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.804507 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-04 00:02:29.804511 | orchestrator | }
2026-04-04 00:02:29.804515 | orchestrator |
2026-04-04 00:02:29.804519 | orchestrator | + binding (known after apply)
2026-04-04 00:02:29.804523 | orchestrator |
2026-04-04 00:02:29.804527 | orchestrator | + fixed_ip {
2026-04-04 00:02:29.804530 | orchestrator | + ip_address = "192.168.16.5"
2026-04-04 00:02:29.804534 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.804538 | orchestrator | }
2026-04-04 00:02:29.804542 | orchestrator | }
2026-04-04 00:02:29.804703 | orchestrator |
2026-04-04 00:02:29.804730 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-04-04 00:02:29.804738 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-04 00:02:29.804742 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.804746 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-04 00:02:29.804750 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-04 00:02:29.804754 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.804758 | orchestrator | + device_id = (known after apply)
2026-04-04 00:02:29.804761 | orchestrator | + device_owner = (known after apply)
2026-04-04 00:02:29.804889 | orchestrator | + dns_assignment = (known after apply)
2026-04-04 00:02:29.804902 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.804906 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.804910 | orchestrator | + mac_address = (known after apply)
2026-04-04 00:02:29.804914 | orchestrator | + network_id = (known after apply)
2026-04-04 00:02:29.804918 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.804922 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.804975 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.804991 | orchestrator | + security_group_ids = (known after apply)
2026-04-04 00:02:29.804995 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.804999 | orchestrator |
2026-04-04 00:02:29.805003 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805007 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-04 00:02:29.805010 | orchestrator | }
2026-04-04 00:02:29.805014 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805057 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-04 00:02:29.805061 | orchestrator | }
2026-04-04 00:02:29.805065 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805069 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-04 00:02:29.805080 | orchestrator | }
2026-04-04 00:02:29.805084 | orchestrator |
2026-04-04 00:02:29.805088 | orchestrator | + binding (known after apply)
2026-04-04 00:02:29.805092 | orchestrator |
2026-04-04 00:02:29.805095 | orchestrator | + fixed_ip {
2026-04-04 00:02:29.805099 | orchestrator | + ip_address = "192.168.16.10"
2026-04-04 00:02:29.805131 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.805136 | orchestrator | }
2026-04-04 00:02:29.805140 | orchestrator | }
2026-04-04 00:02:29.805419 | orchestrator |
2026-04-04 00:02:29.805436 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-04-04 00:02:29.805441 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-04 00:02:29.805445 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.805449 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-04 00:02:29.805453 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-04 00:02:29.805457 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.805461 | orchestrator | + device_id = (known after apply)
2026-04-04 00:02:29.805465 | orchestrator | + device_owner = (known after apply)
2026-04-04 00:02:29.805469 | orchestrator | + dns_assignment = (known after apply)
2026-04-04 00:02:29.805472 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.805476 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.805480 | orchestrator | + mac_address = (known after apply)
2026-04-04 00:02:29.805484 | orchestrator | + network_id = (known after apply)
2026-04-04 00:02:29.805488 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.805492 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.805496 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.805500 | orchestrator | + security_group_ids = (known after apply)
2026-04-04 00:02:29.805503 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.805507 | orchestrator |
2026-04-04 00:02:29.805511 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805515 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-04 00:02:29.805519 | orchestrator | }
2026-04-04 00:02:29.805523 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805527 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-04 00:02:29.805531 | orchestrator | }
2026-04-04 00:02:29.805535 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805539 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-04 00:02:29.805543 | orchestrator | }
2026-04-04 00:02:29.805547 | orchestrator |
2026-04-04 00:02:29.805551 | orchestrator | + binding (known after apply)
2026-04-04 00:02:29.805555 | orchestrator |
2026-04-04 00:02:29.805558 | orchestrator | + fixed_ip {
2026-04-04 00:02:29.805563 | orchestrator | + ip_address = "192.168.16.11"
2026-04-04 00:02:29.805567 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.805571 | orchestrator | }
2026-04-04 00:02:29.805574 | orchestrator | }
2026-04-04 00:02:29.805718 | orchestrator |
2026-04-04 00:02:29.805730 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-04-04 00:02:29.805735 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-04 00:02:29.805739 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.805743 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-04 00:02:29.805747 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-04 00:02:29.805751 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.805762 | orchestrator | + device_id = (known after apply)
2026-04-04 00:02:29.805766 | orchestrator | + device_owner = (known after apply)
2026-04-04 00:02:29.805770 | orchestrator | + dns_assignment = (known after apply)
2026-04-04 00:02:29.805774 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.805782 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.805786 | orchestrator | + mac_address = (known after apply)
2026-04-04 00:02:29.805790 | orchestrator | + network_id = (known after apply)
2026-04-04 00:02:29.805794 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.805831 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.805836 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.805840 | orchestrator | + security_group_ids = (known after apply)
2026-04-04 00:02:29.805844 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.805847 | orchestrator |
2026-04-04 00:02:29.805851 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805855 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-04 00:02:29.805859 | orchestrator | }
2026-04-04 00:02:29.805863 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805867 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-04 00:02:29.805870 | orchestrator | }
2026-04-04 00:02:29.805874 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.805878 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-04 00:02:29.805882 | orchestrator | }
2026-04-04 00:02:29.805886 | orchestrator |
2026-04-04 00:02:29.805889 | orchestrator | + binding (known after apply)
2026-04-04 00:02:29.805893 | orchestrator |
2026-04-04 00:02:29.805897 | orchestrator | + fixed_ip {
2026-04-04 00:02:29.805901 | orchestrator | + ip_address = "192.168.16.12"
2026-04-04 00:02:29.805904 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.805908 | orchestrator | }
2026-04-04 00:02:29.805912 | orchestrator | }
2026-04-04 00:02:29.806087 | orchestrator |
2026-04-04 00:02:29.806101 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-04-04 00:02:29.806106 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-04 00:02:29.806110 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.806113 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-04 00:02:29.806118 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-04 00:02:29.806122 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.806125 | orchestrator | + device_id = (known after apply)
2026-04-04 00:02:29.806129 | orchestrator | + device_owner = (known after apply)
2026-04-04 00:02:29.806133 | orchestrator | + dns_assignment = (known after apply)
2026-04-04 00:02:29.806137 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.806141 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.806144 | orchestrator | + mac_address = (known after apply)
2026-04-04 00:02:29.806148 | orchestrator | + network_id = (known after apply)
2026-04-04 00:02:29.806152 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.806156 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.806159 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.806163 | orchestrator | + security_group_ids = (known after apply)
2026-04-04 00:02:29.806167 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.806172 | orchestrator |
2026-04-04 00:02:29.806176 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806180 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-04 00:02:29.806184 | orchestrator | }
2026-04-04 00:02:29.806187 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806191 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-04 00:02:29.806195 | orchestrator | }
2026-04-04 00:02:29.806199 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806203 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-04 00:02:29.806206 | orchestrator | }
2026-04-04 00:02:29.806210 | orchestrator |
2026-04-04 00:02:29.806220 | orchestrator | + binding (known after apply)
2026-04-04 00:02:29.806224 | orchestrator |
2026-04-04 00:02:29.806228 | orchestrator | + fixed_ip {
2026-04-04 00:02:29.806231 | orchestrator | + ip_address = "192.168.16.13"
2026-04-04 00:02:29.806235 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.806239 | orchestrator | }
2026-04-04 00:02:29.806243 | orchestrator | }
2026-04-04 00:02:29.806384 | orchestrator |
2026-04-04 00:02:29.806396 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-04-04 00:02:29.806400 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-04 00:02:29.806404 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.806408 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-04 00:02:29.806411 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-04 00:02:29.806415 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.806419 | orchestrator | + device_id = (known after apply)
2026-04-04 00:02:29.806423 | orchestrator | + device_owner = (known after apply)
2026-04-04 00:02:29.806426 | orchestrator | + dns_assignment = (known after apply)
2026-04-04 00:02:29.806430 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.806434 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.806438 | orchestrator | + mac_address = (known after apply)
2026-04-04 00:02:29.806441 | orchestrator | + network_id = (known after apply)
2026-04-04 00:02:29.806445 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.806449 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.806453 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.806457 | orchestrator | + security_group_ids = (known after apply)
2026-04-04 00:02:29.806460 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.806466 | orchestrator |
2026-04-04 00:02:29.806469 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806473 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-04 00:02:29.806478 | orchestrator | }
2026-04-04 00:02:29.806484 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806489 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-04 00:02:29.806495 | orchestrator | }
2026-04-04 00:02:29.806501 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806506 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-04 00:02:29.806512 | orchestrator | }
2026-04-04 00:02:29.806517 | orchestrator |
2026-04-04 00:02:29.806523 | orchestrator | + binding (known after apply)
2026-04-04 00:02:29.806529 | orchestrator |
2026-04-04 00:02:29.806534 | orchestrator | + fixed_ip {
2026-04-04 00:02:29.806540 | orchestrator | + ip_address = "192.168.16.14"
2026-04-04 00:02:29.806547 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.806551 | orchestrator | }
2026-04-04 00:02:29.806555 | orchestrator | }
2026-04-04 00:02:29.806689 | orchestrator |
2026-04-04 00:02:29.806701 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-04-04 00:02:29.806705 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-04 00:02:29.806709 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.806713 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-04 00:02:29.806717 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-04 00:02:29.806721 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.806725 | orchestrator | + device_id = (known after apply)
2026-04-04 00:02:29.806729 | orchestrator | + device_owner = (known after apply)
2026-04-04 00:02:29.806732 | orchestrator | + dns_assignment = (known after apply)
2026-04-04 00:02:29.806736 | orchestrator | + dns_name = (known after apply)
2026-04-04 00:02:29.806740 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.806744 | orchestrator | + mac_address = (known after apply)
2026-04-04 00:02:29.806747 | orchestrator | + network_id = (known after apply)
2026-04-04 00:02:29.806751 | orchestrator | + port_security_enabled = (known after apply)
2026-04-04 00:02:29.806755 | orchestrator | + qos_policy_id = (known after apply)
2026-04-04 00:02:29.806764 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.806768 | orchestrator | + security_group_ids = (known after apply)
2026-04-04 00:02:29.806772 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.806776 | orchestrator |
2026-04-04 00:02:29.806780 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806783 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-04 00:02:29.806787 | orchestrator | }
2026-04-04 00:02:29.806791 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806795 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-04 00:02:29.806830 | orchestrator | }
2026-04-04 00:02:29.806834 | orchestrator | + allowed_address_pairs {
2026-04-04 00:02:29.806838 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-04 00:02:29.806842 | orchestrator | }
2026-04-04 00:02:29.806846 | orchestrator |
2026-04-04 00:02:29.806853 | orchestrator | + binding (known after apply)
2026-04-04 00:02:29.806857 | orchestrator |
2026-04-04 00:02:29.806861 | orchestrator | + fixed_ip {
2026-04-04 00:02:29.806865 | orchestrator | + ip_address = "192.168.16.15"
2026-04-04 00:02:29.806869 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.806872 | orchestrator | }
2026-04-04 00:02:29.806876 | orchestrator | }
2026-04-04 00:02:29.806923 | orchestrator |
2026-04-04 00:02:29.806935 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-04-04 00:02:29.806939 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-04-04 00:02:29.806943 | orchestrator | + force_destroy = false
2026-04-04 00:02:29.806947 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.806951 | orchestrator | + port_id = (known after apply)
2026-04-04 00:02:29.806955 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.806958 | orchestrator | + router_id = (known after apply)
2026-04-04 00:02:29.806962 | orchestrator | + subnet_id = (known after apply)
2026-04-04 00:02:29.806966 | orchestrator | }
2026-04-04 00:02:29.807049 | orchestrator |
2026-04-04 00:02:29.807061 | orchestrator | # openstack_networking_router_v2.router will be created
2026-04-04 00:02:29.807065 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-04-04 00:02:29.807069 | orchestrator | + admin_state_up = (known after apply)
2026-04-04 00:02:29.807073 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.807077 | orchestrator | + availability_zone_hints = [
2026-04-04 00:02:29.807081 | orchestrator | + "nova",
2026-04-04 00:02:29.807084 | orchestrator | ]
2026-04-04 00:02:29.807088 | orchestrator | + distributed = (known after apply)
2026-04-04 00:02:29.807092 | orchestrator | + enable_snat = (known after apply)
2026-04-04 00:02:29.807096 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-04-04 00:02:29.807100 | orchestrator | + external_qos_policy_id = (known after apply)
2026-04-04 00:02:29.807104 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.807108 | orchestrator | + name = "testbed"
2026-04-04 00:02:29.807111 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.807115 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.807119 | orchestrator |
2026-04-04 00:02:29.807123 | orchestrator | + external_fixed_ip (known after apply)
2026-04-04 00:02:29.807127 | orchestrator | }
2026-04-04 00:02:29.807214 | orchestrator |
2026-04-04 00:02:29.807226 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-04-04 00:02:29.807231 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-04-04 00:02:29.807235 | orchestrator | + description = "ssh"
2026-04-04 00:02:29.807239 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.807243 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.807247 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.807251 | orchestrator | + port_range_max = 22
2026-04-04 00:02:29.807254 | orchestrator | + port_range_min = 22
2026-04-04 00:02:29.807258 | orchestrator | + protocol = "tcp"
2026-04-04 00:02:29.807262 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.807270 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.807274 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.807278 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-04 00:02:29.807282 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.807286 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.807289 | orchestrator | }
2026-04-04 00:02:29.807367 | orchestrator |
2026-04-04 00:02:29.807378 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-04-04 00:02:29.807383 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-04-04 00:02:29.807387 | orchestrator | + description = "wireguard"
2026-04-04 00:02:29.807390 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.807394 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.807398 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.807402 | orchestrator | + port_range_max = 51820
2026-04-04 00:02:29.807406 | orchestrator | + port_range_min = 51820
2026-04-04 00:02:29.807410 | orchestrator | + protocol = "udp"
2026-04-04 00:02:29.807413 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.807417 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.807421 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.807425 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-04 00:02:29.807428 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.807432 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.807436 | orchestrator | }
2026-04-04 00:02:29.807499 | orchestrator |
2026-04-04 00:02:29.807511 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-04-04 00:02:29.807515 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-04-04 00:02:29.807519 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.807523 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.807527 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.807530 | orchestrator | + protocol = "tcp"
2026-04-04 00:02:29.807534 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.807538 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.807542 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.807545 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-04 00:02:29.807549 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.807553 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.807557 | orchestrator | }
2026-04-04 00:02:29.807615 | orchestrator |
2026-04-04 00:02:29.807626 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-04-04 00:02:29.807631 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-04-04 00:02:29.807635 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.807638 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.807642 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.807646 | orchestrator | + protocol = "udp"
2026-04-04 00:02:29.807650 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.807653 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.807657 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.807661 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-04 00:02:29.807665 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.807668 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.807672 | orchestrator | }
2026-04-04 00:02:29.807731 | orchestrator |
2026-04-04 00:02:29.807742 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-04-04 00:02:29.807750 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-04-04 00:02:29.807754 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.807758 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.807762 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.807766 | orchestrator | + protocol = "icmp"
2026-04-04 00:02:29.807769 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.807773 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.807777 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.807781 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-04 00:02:29.807784 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.807788 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.807792 | orchestrator | }
2026-04-04 00:02:29.807873 | orchestrator |
2026-04-04 00:02:29.807885 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-04 00:02:29.807889 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-04 00:02:29.807893 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.807897 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.807901 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.807905 | orchestrator | + protocol = "tcp"
2026-04-04 00:02:29.807908 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.807912 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.807919 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.807923 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-04 00:02:29.807927 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.807931 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.807934 | orchestrator | }
2026-04-04 00:02:29.807993 | orchestrator |
2026-04-04 00:02:29.808005 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-04 00:02:29.808009 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-04 00:02:29.808014 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.808017 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.808021 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.808025 | orchestrator | + protocol = "udp"
2026-04-04 00:02:29.808029 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.808033 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.808036 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.808040 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-04 00:02:29.808044 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.808048 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.808052 | orchestrator | }
2026-04-04 00:02:29.808114 | orchestrator |
2026-04-04 00:02:29.808126 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-04 00:02:29.808131 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-04 00:02:29.808134 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.808141 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.808145 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.808149 | orchestrator | + protocol = "icmp"
2026-04-04 00:02:29.808153 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.808156 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.808160 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.808164 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-04 00:02:29.808168 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.808171 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.808179 | orchestrator | }
2026-04-04 00:02:29.808246 | orchestrator |
2026-04-04 00:02:29.808258 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-04 00:02:29.808262 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-04 00:02:29.808266 | orchestrator | + description = "vrrp"
2026-04-04 00:02:29.808270 | orchestrator | + direction = "ingress"
2026-04-04 00:02:29.808274 | orchestrator | + ethertype = "IPv4"
2026-04-04 00:02:29.808278 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.808281 | orchestrator | + protocol = "112"
2026-04-04 00:02:29.808285 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.808289 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-04 00:02:29.808293 | orchestrator | + remote_group_id = (known after apply)
2026-04-04 00:02:29.808296 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-04 00:02:29.808300 | orchestrator | + security_group_id = (known after apply)
2026-04-04 00:02:29.808304 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.808308 | orchestrator | }
2026-04-04 00:02:29.808356 | orchestrator |
2026-04-04 00:02:29.808368 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-04 00:02:29.808372 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-04 00:02:29.808376 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.808380 | orchestrator | + description = "management security group"
2026-04-04 00:02:29.808384 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.808388 | orchestrator | + name = "testbed-management"
2026-04-04 00:02:29.808392 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.808395 | orchestrator | + stateful = (known after apply)
2026-04-04 00:02:29.808399 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.808403 | orchestrator | }
2026-04-04 00:02:29.808450 | orchestrator |
2026-04-04 00:02:29.808462 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-04 00:02:29.808467 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-04 00:02:29.808471 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.808474 | orchestrator | + description = "node security group"
2026-04-04 00:02:29.808478 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.808482 | orchestrator | + name = "testbed-node"
2026-04-04 00:02:29.808486 | orchestrator | + region = (known after apply)
2026-04-04 00:02:29.808489 | orchestrator | + stateful = (known after apply)
2026-04-04 00:02:29.808493 | orchestrator | + tenant_id = (known after apply)
2026-04-04 00:02:29.808497 | orchestrator | }
2026-04-04 00:02:29.808603 | orchestrator |
2026-04-04 00:02:29.808615 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-04-04 00:02:29.808620 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-04 00:02:29.808624 | orchestrator | + all_tags = (known after apply)
2026-04-04 00:02:29.808627 | orchestrator | + cidr = "192.168.16.0/20"
2026-04-04 00:02:29.808631 | orchestrator | + dns_nameservers = [
2026-04-04 00:02:29.808636 | orchestrator | + "8.8.8.8",
2026-04-04 00:02:29.808639 | orchestrator | + "9.9.9.9",
2026-04-04 00:02:29.808643 | orchestrator | ]
2026-04-04 00:02:29.808647 | orchestrator | + enable_dhcp = true
2026-04-04 00:02:29.808651 | orchestrator | + gateway_ip = (known after apply)
2026-04-04 00:02:29.808655 | orchestrator | + id = (known after apply)
2026-04-04 00:02:29.808659 | orchestrator | + ip_version = 4
2026-04-04 00:02:29.808662 | orchestrator | + ipv6_address_mode = (known after apply)
2026-04-04 00:02:29.808666 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-04-04 00:02:29.808670 | orchestrator | + name = "subnet-testbed-management"
2026-04-04 00:02:29.808674 | orchestrator | + network_id = (known after apply) 2026-04-04 00:02:29.808677 | orchestrator | + no_gateway = false 2026-04-04 00:02:29.808681 | orchestrator | + region = (known after apply) 2026-04-04 00:02:29.808685 | orchestrator | + service_types = (known after apply) 2026-04-04 00:02:29.808694 | orchestrator | + tenant_id = (known after apply) 2026-04-04 00:02:29.808698 | orchestrator | 2026-04-04 00:02:29.808702 | orchestrator | + allocation_pool { 2026-04-04 00:02:29.808706 | orchestrator | + end = "192.168.31.250" 2026-04-04 00:02:29.808709 | orchestrator | + start = "192.168.31.200" 2026-04-04 00:02:29.808713 | orchestrator | } 2026-04-04 00:02:29.808717 | orchestrator | } 2026-04-04 00:02:29.808750 | orchestrator | 2026-04-04 00:02:29.808761 | orchestrator | # terraform_data.image will be created 2026-04-04 00:02:29.808766 | orchestrator | + resource "terraform_data" "image" { 2026-04-04 00:02:29.808770 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.808773 | orchestrator | + input = "Ubuntu 24.04" 2026-04-04 00:02:29.808777 | orchestrator | + output = (known after apply) 2026-04-04 00:02:29.808781 | orchestrator | } 2026-04-04 00:02:29.808847 | orchestrator | 2026-04-04 00:02:29.808860 | orchestrator | # terraform_data.image_node will be created 2026-04-04 00:02:29.808864 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-04 00:02:29.808868 | orchestrator | + id = (known after apply) 2026-04-04 00:02:29.808872 | orchestrator | + input = "Ubuntu 24.04" 2026-04-04 00:02:29.808875 | orchestrator | + output = (known after apply) 2026-04-04 00:02:29.808879 | orchestrator | } 2026-04-04 00:02:29.808896 | orchestrator | 2026-04-04 00:02:29.808900 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-04-04 00:02:29.808912 | orchestrator |
2026-04-04 00:02:29.808917 | orchestrator | Changes to Outputs:
2026-04-04 00:02:29.808927 | orchestrator | + manager_address = (sensitive value)
2026-04-04 00:02:29.808931 | orchestrator | + private_key = (sensitive value)
2026-04-04 00:02:29.997774 | orchestrator | terraform_data.image_node: Creating...
2026-04-04 00:02:29.998091 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=5c13c711-e428-bc50-70b2-d6287d91b8d6]
2026-04-04 00:02:29.998267 | orchestrator | terraform_data.image: Creating...
2026-04-04 00:02:29.999309 | orchestrator | terraform_data.image: Creation complete after 0s [id=b709f013-362b-681e-f4e4-8f89fce10089]
2026-04-04 00:02:30.017498 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-04 00:02:30.017603 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-04 00:02:30.029643 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-04 00:02:30.030249 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-04 00:02:30.031436 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-04 00:02:30.034734 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-04 00:02:30.035728 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-04 00:02:30.038266 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-04 00:02:30.051303 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-04 00:02:30.053381 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-04 00:02:30.497978 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-04 00:02:30.501056 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-04 00:02:30.506086 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-04 00:02:30.514591 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-04 00:02:31.100653 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=c3da1b25-d2b5-4b61-bc49-2dc8ec533eba]
2026-04-04 00:02:31.105952 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-04 00:02:31.230903 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-04 00:02:31.238236 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-04 00:02:33.699774 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=ab9c2046-b8c0-414f-97e1-5f0c3376e903]
2026-04-04 00:02:36.493893 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=1f1f6a26-dade-427f-8374-af0cc4364dc0]
2026-04-04 00:02:36.493989 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a]
2026-04-04 00:02:36.494095 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-04 00:02:36.494116 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-04 00:02:36.494128 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-04 00:02:36.494139 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=fbd8dc74-d964-4e06-8b01-1da5dc54c434]
2026-04-04 00:02:36.494150 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-04 00:02:36.494162 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=3b29289e-9d48-43bf-9ccb-2d527cba3b10]
2026-04-04 00:02:36.494174 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=fd41852f-1b07-4466-8009-0d8f18f39338]
2026-04-04 00:02:36.494185 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=0bfc49b0-6c75-49d4-a01c-0507cea22dca]
2026-04-04 00:02:36.494195 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-04 00:02:36.494207 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-04 00:02:36.494217 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-04 00:02:36.494229 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=3688be93-9535-40e0-bcab-38dca1989364]
2026-04-04 00:02:36.494240 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=c11eb6c9-bfbf-4293-bc40-9ec52317ad2c]
2026-04-04 00:02:36.494251 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-04 00:02:36.494262 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-04 00:02:36.494273 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=be914db7-2007-4c76-8182-cad35e2f72bf]
2026-04-04 00:02:36.494285 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=e93b2576-7410-4b5a-afb0-b95db9925720]
2026-04-04 00:02:36.494296 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-04 00:02:36.622074 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 3s [id=20dbfc4fafc9ac5acc6d1b355293ed5e81dffaa6]
2026-04-04 00:02:36.624327 | orchestrator | local_file.id_rsa_pub: Creation complete after 3s [id=e2519e81aab9c034f9c5c4996ade28d68b56ab28]
2026-04-04 00:02:37.158505 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=a02f1b50-e748-4ffa-92fc-a34c46f12dd0]
2026-04-04 00:02:37.191562 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=cd83c7e7-5e97-436e-8e2f-fc883acebe13]
2026-04-04 00:02:37.201462 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=4c9340f8-6bc1-41cf-8ec5-49feac56714d]
2026-04-04 00:02:37.252704 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=dc287254-001b-4450-afd2-9bec2027ae79]
2026-04-04 00:02:37.270063 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=2edc74eb-d496-4371-809c-e00c1f1a3999]
2026-04-04 00:02:37.270849 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=863798db-c475-4907-865f-d751361d3bd3]
2026-04-04 00:02:38.377799 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=90c6eb67-2374-4717-94e0-bf1f3d7ced10]
2026-04-04 00:02:38.384689 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-04 00:02:38.385473 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-04 00:02:38.386737 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-04 00:02:38.606757 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=82fa730b-a95f-441b-9baa-3ebea2ebfdaa]
2026-04-04 00:02:38.616687 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-04 00:02:38.621444 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-04 00:02:38.621521 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-04 00:02:38.621759 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-04 00:02:38.622834 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-04 00:02:38.626592 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-04 00:02:38.817051 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=5373ab94-ac3c-4756-9e79-8d40856eaa13]
2026-04-04 00:02:39.017325 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=5de3dac7-b473-45d1-b539-40b3a59957ff]
2026-04-04 00:02:39.096793 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=290f09a4-d8c1-4a33-bd5d-3d1ba19156ce]
2026-04-04 00:02:39.112948 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-04 00:02:39.113022 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-04 00:02:39.115058 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-04 00:02:39.116190 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-04 00:02:39.121205 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-04 00:02:39.170425 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=8198186a-22d2-488f-9251-8edce5d4aef5]
2026-04-04 00:02:39.181441 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-04 00:02:39.346005 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=a5da15e1-7f55-477a-8d0a-634f1b9ef004]
2026-04-04 00:02:39.361319 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=431edd44-3395-4faf-8aa2-0f283b3e70e1]
2026-04-04 00:02:39.363796 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-04 00:02:39.378123 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-04 00:02:39.859810 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=faf8db6b-041d-404e-80f5-0162ccc748ec]
2026-04-04 00:02:39.874881 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-04 00:02:39.985611 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=56bab0fb-99e7-4848-8cde-3bde9548a17c]
2026-04-04 00:02:39.997385 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-04 00:02:40.302586 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=e1791048-c38c-471a-9496-6ca34b84425c]
2026-04-04 00:02:40.342781 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=61634538-d94c-498b-b7e1-8a9792c9afb6]
2026-04-04 00:02:40.387910 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=89781c4d-21d7-41c6-a8bf-c34f4b2147d3]
2026-04-04 00:02:40.500146 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=2e7338c2-408c-4b35-a240-74ba095e7368]
2026-04-04 00:02:40.909307 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=b8b8cdd2-f1fe-44b7-a8a1-79122429115d]
2026-04-04 00:02:41.447082 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=e887b996-5922-47c9-8361-a1111d157694]
2026-04-04 00:02:41.564170 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 3s [id=9b7e1f8d-ac7a-45ef-9f93-16c7e33bd25c]
2026-04-04 00:02:41.722771 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=82157739-a4c7-48f3-bf06-01714e43fe47]
2026-04-04 00:02:41.753554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=881cb2b0-1491-4155-bbfe-59930bb07b6e]
2026-04-04 00:02:42.385971 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=88147c0d-3221-48d7-9624-4c0e31320330]
2026-04-04 00:02:42.413549 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-04 00:02:42.422975 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-04 00:02:42.423084 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-04 00:02:42.427822 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-04 00:02:42.436846 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-04 00:02:42.437262 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-04 00:02:42.437642 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-04 00:02:45.505731 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 4s [id=b5f45c3b-7ea5-4179-b5db-85dcc2023498]
2026-04-04 00:02:45.515764 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-04 00:02:45.521917 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-04 00:02:45.523588 | orchestrator | local_file.inventory: Creating...
2026-04-04 00:02:45.526473 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=288208ec366d024343ef3d63e3fc9fead89f9970]
2026-04-04 00:02:45.529012 | orchestrator | local_file.inventory: Creation complete after 0s [id=ff76510ae8301715ee1aa9e0334b691ccdd34934]
2026-04-04 00:02:46.751095 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=b5f45c3b-7ea5-4179-b5db-85dcc2023498]
2026-04-04 00:02:52.426414 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-04 00:02:52.428219 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-04 00:02:52.431705 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-04 00:02:52.441035 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-04 00:02:52.441086 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-04 00:02:52.441102 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-04 00:03:02.435148 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-04 00:03:02.435251 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-04 00:03:02.435262 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-04 00:03:02.441682 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-04 00:03:02.441780 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-04 00:03:02.441796 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-04 00:03:12.443438 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-04 00:03:12.443549 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-04 00:03:12.443562 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-04 00:03:12.443571 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-04 00:03:12.443580 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-04 00:03:12.443589 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-04 00:03:22.451564 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-04 00:03:22.451684 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-04 00:03:22.451713 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-04-04 00:03:22.451725 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-04 00:03:22.451737 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-04-04 00:03:22.451748 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-04 00:03:23.532566 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=5325beb4-e329-4aee-a908-ddadad4f7f99]
2026-04-04 00:03:32.460102 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-04-04 00:03:32.460209 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-04-04 00:03:32.460223 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-04 00:03:32.460235 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-04 00:03:32.460257 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-04 00:03:33.867689 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 52s [id=858bf7f5-6fda-441f-b5f8-4f65234f69aa]
2026-04-04 00:03:34.074344 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 52s [id=08b88af9-8c92-4431-bae3-4aed6667348a]
2026-04-04 00:03:42.460372 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-04-04 00:03:42.460479 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-04-04 00:03:42.460508 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-04-04 00:03:52.468235 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed]
2026-04-04 00:03:52.468355 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m10s elapsed]
2026-04-04 00:03:52.468368 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m10s elapsed]
2026-04-04 00:03:54.770902 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m13s [id=d0eb72c9-2da1-4bfa-91e6-ba036aca8412]
2026-04-04 00:04:02.475866 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m20s elapsed]
2026-04-04 00:04:02.475999 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m20s elapsed]
2026-04-04 00:04:03.789217 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m22s [id=2efeb307-f737-4fa9-82c6-5ef3bfd5f2ce]
2026-04-04 00:04:03.881701 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m22s [id=43e5b06e-c7e7-49f0-bec4-0f93071610c7]
2026-04-04 00:04:03.903971 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-04 00:04:03.906350 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-04 00:04:03.909587 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-04 00:04:03.909664 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-04 00:04:03.911534 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=333979593902601492]
2026-04-04 00:04:03.912794 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-04 00:04:03.919748 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-04 00:04:03.927819 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-04 00:04:03.928121 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-04 00:04:03.931389 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-04 00:04:03.939352 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-04 00:04:03.946068 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-04 00:04:07.348652 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=43e5b06e-c7e7-49f0-bec4-0f93071610c7/fd41852f-1b07-4466-8009-0d8f18f39338]
2026-04-04 00:04:07.356985 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=5325beb4-e329-4aee-a908-ddadad4f7f99/ab9c2046-b8c0-414f-97e1-5f0c3376e903]
2026-04-04 00:04:07.380164 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=2efeb307-f737-4fa9-82c6-5ef3bfd5f2ce/1f1f6a26-dade-427f-8374-af0cc4364dc0]
2026-04-04 00:04:07.397965 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=5325beb4-e329-4aee-a908-ddadad4f7f99/3b29289e-9d48-43bf-9ccb-2d527cba3b10]
2026-04-04 00:04:07.402430 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=43e5b06e-c7e7-49f0-bec4-0f93071610c7/0bfc49b0-6c75-49d4-a01c-0507cea22dca]
2026-04-04 00:04:07.419022 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=2efeb307-f737-4fa9-82c6-5ef3bfd5f2ce/3688be93-9535-40e0-bcab-38dca1989364]
2026-04-04 00:04:13.495902 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=5325beb4-e329-4aee-a908-ddadad4f7f99/c11eb6c9-bfbf-4293-bc40-9ec52317ad2c]
2026-04-04 00:04:13.503237 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=43e5b06e-c7e7-49f0-bec4-0f93071610c7/3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a]
2026-04-04 00:04:13.527676 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=2efeb307-f737-4fa9-82c6-5ef3bfd5f2ce/fbd8dc74-d964-4e06-8b01-1da5dc54c434]
2026-04-04 00:04:13.952510 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-04 00:04:23.958458 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-04 00:04:24.524017 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=4361cda3-c74d-4fb4-b8d3-1acdbe8ba486]
2026-04-04 00:04:24.839211 | orchestrator |
2026-04-04 00:04:24.839301 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-04 00:04:24.839314 | orchestrator |
2026-04-04 00:04:24.839320 | orchestrator | Outputs:
2026-04-04 00:04:24.839327 | orchestrator |
2026-04-04 00:04:24.839334 | orchestrator | manager_address =
2026-04-04 00:04:24.839340 | orchestrator | private_key =
2026-04-04 00:04:25.005063 | orchestrator | ok: Runtime: 0:01:59.503161
2026-04-04 00:04:25.033176 |
2026-04-04 00:04:25.033323 | TASK [Create infrastructure (stable)]
2026-04-04 00:04:25.568001 | orchestrator | skipping: Conditional result was False
2026-04-04 00:04:25.583694 |
2026-04-04 00:04:25.583845 | TASK [Fetch manager address]
2026-04-04 00:04:26.079383 | orchestrator | ok
2026-04-04 00:04:26.088843 |
2026-04-04 00:04:26.088988 | TASK [Set manager_host address]
2026-04-04 00:04:26.159799 | orchestrator | ok
2026-04-04 00:04:26.170356 |
2026-04-04 00:04:26.170491 | LOOP [Update ansible collections]
2026-04-04 00:04:27.695993 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-04 00:04:27.696458 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-04 00:04:27.696536 | orchestrator | Starting galaxy collection install process
2026-04-04 00:04:27.696587 | orchestrator | Process install dependency map
2026-04-04 00:04:27.696631 | orchestrator | Starting collection install process
2026-04-04 00:04:27.696675 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-04-04 00:04:27.696745 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-04-04 00:04:27.696809 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-04 00:04:27.696968 | orchestrator | ok: Item: commons Runtime: 0:00:01.149458
2026-04-04 00:04:28.885283 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-04 00:04:28.885425 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-04 00:04:28.885461 | orchestrator | Starting galaxy collection install process
2026-04-04 00:04:28.885488 | orchestrator | Process install dependency map
2026-04-04 00:04:28.885511 | orchestrator | Starting collection install process
2026-04-04 00:04:28.885532 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-04-04 00:04:28.885554 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-04-04 00:04:28.885574 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-04 00:04:28.885608 | orchestrator | ok: Item: services Runtime: 0:00:00.879912
2026-04-04 00:04:28.897565 |
2026-04-04 00:04:28.897699 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-04 00:04:39.489380 | orchestrator | ok
2026-04-04 00:04:39.500134 |
2026-04-04 00:04:39.500270 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-04 00:05:39.551583 | orchestrator | ok
2026-04-04 00:05:39.561031 |
2026-04-04 00:05:39.561155 | TASK [Fetch manager ssh hostkey]
2026-04-04 00:05:41.141733 | orchestrator | Output suppressed because no_log was given
2026-04-04 00:05:41.156978 |
2026-04-04 00:05:41.157157 | TASK [Get ssh keypair from terraform environment]
2026-04-04 00:05:41.693272 | orchestrator | ok: Runtime: 0:00:00.007463
2026-04-04 00:05:41.709359 |
2026-04-04 00:05:41.709525 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-04 00:05:41.759283 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-04 00:05:41.769517 | 2026-04-04 00:05:41.769669 | TASK [Run manager part 0] 2026-04-04 00:05:42.961851 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-04 00:05:43.016114 | orchestrator | 2026-04-04 00:05:43.016191 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-04 00:05:43.016204 | orchestrator | 2026-04-04 00:05:43.016221 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-04 00:05:44.848593 | orchestrator | ok: [testbed-manager] 2026-04-04 00:05:44.848654 | orchestrator | 2026-04-04 00:05:44.848686 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-04 00:05:44.848700 | orchestrator | 2026-04-04 00:05:44.848714 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:05:46.661907 | orchestrator | ok: [testbed-manager] 2026-04-04 00:05:46.661962 | orchestrator | 2026-04-04 00:05:46.661969 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-04 00:05:47.352600 | orchestrator | ok: [testbed-manager] 2026-04-04 00:05:47.352681 | orchestrator | 2026-04-04 00:05:47.352689 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-04 00:05:47.401248 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:05:47.401313 | orchestrator | 2026-04-04 00:05:47.401328 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-04 00:05:47.435414 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:05:47.435482 | orchestrator | 2026-04-04 00:05:47.435496 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-04 00:05:47.478270 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:05:47.478330 | 
orchestrator | 2026-04-04 00:05:47.478337 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-04 00:05:48.188360 | orchestrator | changed: [testbed-manager] 2026-04-04 00:05:48.188430 | orchestrator | 2026-04-04 00:05:48.188442 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-04 00:09:00.342099 | orchestrator | changed: [testbed-manager] 2026-04-04 00:09:00.342171 | orchestrator | 2026-04-04 00:09:00.342189 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-04 00:10:15.931053 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:15.931100 | orchestrator | 2026-04-04 00:10:15.931111 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-04 00:10:37.506519 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:37.506801 | orchestrator | 2026-04-04 00:10:37.506836 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-04 00:10:45.973754 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:45.973851 | orchestrator | 2026-04-04 00:10:45.973867 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-04 00:10:46.036267 | orchestrator | ok: [testbed-manager] 2026-04-04 00:10:46.036462 | orchestrator | 2026-04-04 00:10:46.036482 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-04 00:10:46.870721 | orchestrator | ok: [testbed-manager] 2026-04-04 00:10:46.870825 | orchestrator | 2026-04-04 00:10:46.870854 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-04 00:10:47.602441 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:47.602518 | orchestrator | 2026-04-04 00:10:47.602534 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-04-04 00:10:53.876017 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:53.876085 | orchestrator | 2026-04-04 00:10:53.876108 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-04 00:10:59.726270 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:59.726313 | orchestrator | 2026-04-04 00:10:59.726319 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-04 00:11:02.407700 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:02.407774 | orchestrator | 2026-04-04 00:11:02.407782 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-04 00:11:04.067615 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:04.067696 | orchestrator | 2026-04-04 00:11:04.067711 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-04 00:11:05.097657 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-04 00:11:05.097783 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-04 00:11:05.097811 | orchestrator | 2026-04-04 00:11:05.097836 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-04 00:11:05.141948 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-04 00:11:05.142053 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-04 00:11:05.142069 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-04 00:11:05.142082 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-04 00:11:08.335448 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-04 00:11:08.335486 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-04 00:11:08.335492 | orchestrator | 2026-04-04 00:11:08.335497 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-04 00:11:08.882209 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:08.882298 | orchestrator | 2026-04-04 00:11:08.882340 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-04 00:13:29.825284 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-04 00:13:29.825382 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-04 00:13:29.825399 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-04 00:13:29.825415 | orchestrator | 2026-04-04 00:13:29.825437 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-04 00:13:32.120329 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-04 00:13:32.120448 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-04 00:13:32.120476 | orchestrator | 2026-04-04 00:13:32.120498 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-04 00:13:32.120519 | orchestrator | 2026-04-04 00:13:32.120537 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:13:33.469766 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:33.469845 | orchestrator | 2026-04-04 00:13:33.469859 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-04 00:13:33.512985 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:33.513040 | 
orchestrator | 2026-04-04 00:13:33.513046 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-04 00:13:33.566641 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:33.566701 | orchestrator | 2026-04-04 00:13:33.566708 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-04 00:13:34.300285 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:34.300636 | orchestrator | 2026-04-04 00:13:34.300671 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-04 00:13:34.980178 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:34.980254 | orchestrator | 2026-04-04 00:13:34.980267 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-04 00:13:36.308387 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-04 00:13:36.308489 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-04 00:13:36.308517 | orchestrator | 2026-04-04 00:13:36.308538 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-04 00:13:37.653361 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:37.653403 | orchestrator | 2026-04-04 00:13:37.653408 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-04 00:13:39.330646 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-04 00:13:39.330688 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-04 00:13:39.330700 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-04 00:13:39.330705 | orchestrator | 2026-04-04 00:13:39.330711 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-04 00:13:39.381648 | orchestrator | skipping: 
[testbed-manager] 2026-04-04 00:13:39.381706 | orchestrator | 2026-04-04 00:13:39.381718 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-04 00:13:39.449721 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:39.449759 | orchestrator | 2026-04-04 00:13:39.449882 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-04 00:13:40.169626 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:40.169670 | orchestrator | 2026-04-04 00:13:40.169679 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-04 00:13:40.235554 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:40.235605 | orchestrator | 2026-04-04 00:13:40.235612 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-04 00:13:41.094450 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:13:41.094515 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:41.094526 | orchestrator | 2026-04-04 00:13:41.094534 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-04 00:13:41.128575 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:41.128629 | orchestrator | 2026-04-04 00:13:41.128637 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-04 00:13:41.160940 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:41.160998 | orchestrator | 2026-04-04 00:13:41.161007 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-04 00:13:41.199648 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:41.199707 | orchestrator | 2026-04-04 00:13:41.199717 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-04 00:13:41.269054 | 
orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:41.269098 | orchestrator | 2026-04-04 00:13:41.269104 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-04 00:13:41.983795 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:41.983853 | orchestrator | 2026-04-04 00:13:41.983861 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-04 00:13:41.983868 | orchestrator | 2026-04-04 00:13:41.983876 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:13:43.375969 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:43.376059 | orchestrator | 2026-04-04 00:13:43.376074 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-04 00:13:44.319159 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:44.319241 | orchestrator | 2026-04-04 00:13:44.319256 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:13:44.319270 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-04 00:13:44.319282 | orchestrator | 2026-04-04 00:13:44.617870 | orchestrator | ok: Runtime: 0:08:02.236847 2026-04-04 00:13:44.636521 | 2026-04-04 00:13:44.636754 | TASK [Point out that logging in to the manager is now possible] 2026-04-04 00:13:44.688371 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-04-04 00:13:44.698930 | 2026-04-04 00:13:44.699072 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-04 00:13:44.750205 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete. 
2026-04-04 00:13:44.760591 | 2026-04-04 00:13:44.760739 | TASK [Run manager part 1 + 2] 2026-04-04 00:13:45.635077 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-04 00:13:45.693402 | orchestrator | 2026-04-04 00:13:45.693451 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-04 00:13:45.693459 | orchestrator | 2026-04-04 00:13:45.693472 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:13:48.521026 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:48.521072 | orchestrator | 2026-04-04 00:13:48.521095 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-04 00:13:48.562743 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:48.562800 | orchestrator | 2026-04-04 00:13:48.562810 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-04 00:13:48.614350 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:48.614407 | orchestrator | 2026-04-04 00:13:48.614418 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-04 00:13:48.656619 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:48.656672 | orchestrator | 2026-04-04 00:13:48.656685 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-04 00:13:48.728964 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:48.729015 | orchestrator | 2026-04-04 00:13:48.729023 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-04 00:13:48.798318 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:48.798370 | orchestrator | 2026-04-04 00:13:48.798378 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-04 00:13:48.848382 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-04 00:13:48.848430 | orchestrator | 2026-04-04 00:13:48.848435 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-04 00:13:49.555902 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:49.555967 | orchestrator | 2026-04-04 00:13:49.555981 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-04 00:13:49.605968 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:13:49.606143 | orchestrator | 2026-04-04 00:13:49.606153 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-04 00:13:50.981079 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:50.981210 | orchestrator | 2026-04-04 00:13:50.981226 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-04 00:13:51.566210 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:51.566402 | orchestrator | 2026-04-04 00:13:51.566422 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-04 00:13:52.693311 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:52.693361 | orchestrator | 2026-04-04 00:13:52.693369 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-04 00:14:07.612670 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:07.612712 | orchestrator | 2026-04-04 00:14:07.612718 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-04 00:14:08.278628 | orchestrator | ok: [testbed-manager] 2026-04-04 00:14:08.278738 | orchestrator | 2026-04-04 00:14:08.278757 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-04-04 00:14:08.376407 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:14:08.376499 | orchestrator | 2026-04-04 00:14:08.376515 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-04 00:14:09.272411 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:09.272491 | orchestrator | 2026-04-04 00:14:09.272505 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-04 00:14:10.205098 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:10.205163 | orchestrator | 2026-04-04 00:14:10.205172 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-04 00:14:10.744750 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:10.744839 | orchestrator | 2026-04-04 00:14:10.744856 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-04 00:14:10.791880 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-04 00:14:10.792000 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-04 00:14:10.792019 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-04 00:14:10.792033 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-04 00:14:12.857018 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:12.857062 | orchestrator | 2026-04-04 00:14:12.857069 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-04 00:14:21.337237 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-04 00:14:21.337351 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-04 00:14:21.337380 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-04 00:14:21.337401 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-04 00:14:21.337432 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-04 00:14:21.337452 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-04 00:14:21.337472 | orchestrator | 2026-04-04 00:14:21.337494 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-04 00:14:22.360349 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:22.360432 | orchestrator | 2026-04-04 00:14:22.360447 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-04 00:14:25.299717 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:25.299825 | orchestrator | 2026-04-04 00:14:25.299850 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-04 00:14:25.335447 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:14:25.335530 | orchestrator | 2026-04-04 00:14:25.335543 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-04 00:15:57.292644 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:57.292740 | orchestrator | 2026-04-04 00:15:57.292759 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-04 00:15:58.440269 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:58.440352 | 
orchestrator | 2026-04-04 00:15:58.440371 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:15:58.440384 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-04 00:15:58.440396 | orchestrator | 2026-04-04 00:15:58.930364 | orchestrator | ok: Runtime: 0:02:13.437952 2026-04-04 00:15:58.948495 | 2026-04-04 00:15:58.948681 | TASK [Reboot manager] 2026-04-04 00:16:00.491425 | orchestrator | ok: Runtime: 0:00:00.908278 2026-04-04 00:16:00.507441 | 2026-04-04 00:16:00.507641 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-04 00:16:14.466463 | orchestrator | ok 2026-04-04 00:16:14.477972 | 2026-04-04 00:16:14.478125 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-04 00:17:14.536318 | orchestrator | ok 2026-04-04 00:17:14.547109 | 2026-04-04 00:17:14.547265 | TASK [Deploy manager + bootstrap nodes] 2026-04-04 00:17:16.973887 | orchestrator | 2026-04-04 00:17:16.974100 | orchestrator | # DEPLOY MANAGER 2026-04-04 00:17:16.974119 | orchestrator | 2026-04-04 00:17:16.974128 | orchestrator | + set -e 2026-04-04 00:17:16.974135 | orchestrator | + echo 2026-04-04 00:17:16.974143 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-04 00:17:16.974153 | orchestrator | + echo 2026-04-04 00:17:16.974183 | orchestrator | + cat /opt/manager-vars.sh 2026-04-04 00:17:16.977478 | orchestrator | export NUMBER_OF_NODES=6 2026-04-04 00:17:16.977508 | orchestrator | 2026-04-04 00:17:16.977520 | orchestrator | export CEPH_VERSION=reef 2026-04-04 00:17:16.977533 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-04 00:17:16.977545 | orchestrator | export MANAGER_VERSION=latest 2026-04-04 00:17:16.977569 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-04-04 00:17:16.977580 | orchestrator | 2026-04-04 00:17:16.977598 | orchestrator | export ARA=false 2026-04-04 00:17:16.977610 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-04 00:17:16.977627 | orchestrator | export TEMPEST=true 2026-04-04 00:17:16.977639 | orchestrator | export IS_ZUUL=true 2026-04-04 00:17:16.977650 | orchestrator | 2026-04-04 00:17:16.977668 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-04 00:17:16.977679 | orchestrator | export EXTERNAL_API=false 2026-04-04 00:17:16.977690 | orchestrator | 2026-04-04 00:17:16.977701 | orchestrator | export IMAGE_USER=ubuntu 2026-04-04 00:17:16.977716 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-04 00:17:16.977727 | orchestrator | 2026-04-04 00:17:16.977738 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-04 00:17:16.977755 | orchestrator | 2026-04-04 00:17:16.977766 | orchestrator | + echo 2026-04-04 00:17:16.977778 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 00:17:16.978573 | orchestrator | ++ export INTERACTIVE=false 2026-04-04 00:17:16.978594 | orchestrator | ++ INTERACTIVE=false 2026-04-04 00:17:16.978608 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 00:17:16.978620 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 00:17:16.978772 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 00:17:16.978788 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 00:17:16.978799 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 00:17:16.978809 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 00:17:16.978820 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 00:17:16.978831 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 00:17:16.978843 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 00:17:16.978854 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 00:17:16.978865 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 00:17:16.978876 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-04 00:17:16.978896 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-04 00:17:16.978908 | orchestrator | ++ 
export ARA=false 2026-04-04 00:17:16.978919 | orchestrator | ++ ARA=false 2026-04-04 00:17:16.978930 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 00:17:16.978940 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 00:17:16.978955 | orchestrator | ++ export TEMPEST=true 2026-04-04 00:17:16.978967 | orchestrator | ++ TEMPEST=true 2026-04-04 00:17:16.978978 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 00:17:16.978988 | orchestrator | ++ IS_ZUUL=true 2026-04-04 00:17:16.978999 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-04 00:17:16.979010 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-04 00:17:16.979021 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 00:17:16.979032 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 00:17:16.979043 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 00:17:16.979053 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 00:17:16.979065 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 00:17:16.979076 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 00:17:16.979086 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 00:17:16.979098 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 00:17:16.979112 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-04 00:17:17.032461 | orchestrator | + docker version 2026-04-04 00:17:17.152015 | orchestrator | Client: Docker Engine - Community 2026-04-04 00:17:17.152122 | orchestrator | Version: 27.5.1 2026-04-04 00:17:17.152138 | orchestrator | API version: 1.47 2026-04-04 00:17:17.152151 | orchestrator | Go version: go1.22.11 2026-04-04 00:17:17.152162 | orchestrator | Git commit: 9f9e405 2026-04-04 00:17:17.152174 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-04 00:17:17.152186 | orchestrator | OS/Arch: linux/amd64 2026-04-04 00:17:17.152197 | orchestrator | Context: default 2026-04-04 00:17:17.152207 | orchestrator | 2026-04-04 
00:17:17.152219 | orchestrator | Server: Docker Engine - Community 2026-04-04 00:17:17.152275 | orchestrator | Engine: 2026-04-04 00:17:17.152288 | orchestrator | Version: 27.5.1 2026-04-04 00:17:17.152299 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-04 00:17:17.152340 | orchestrator | Go version: go1.22.11 2026-04-04 00:17:17.152352 | orchestrator | Git commit: 4c9b3b0 2026-04-04 00:17:17.152364 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-04 00:17:17.152374 | orchestrator | OS/Arch: linux/amd64 2026-04-04 00:17:17.152385 | orchestrator | Experimental: false 2026-04-04 00:17:17.152396 | orchestrator | containerd: 2026-04-04 00:17:17.152407 | orchestrator | Version: v2.2.2 2026-04-04 00:17:17.152418 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-04 00:17:17.152429 | orchestrator | runc: 2026-04-04 00:17:17.152440 | orchestrator | Version: 1.3.4 2026-04-04 00:17:17.152452 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-04 00:17:17.152463 | orchestrator | docker-init: 2026-04-04 00:17:17.152474 | orchestrator | Version: 0.19.0 2026-04-04 00:17:17.152485 | orchestrator | GitCommit: de40ad0 2026-04-04 00:17:17.153541 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-04 00:17:17.161812 | orchestrator | + set -e 2026-04-04 00:17:17.161864 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 00:17:17.161870 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 00:17:17.161876 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 00:17:17.161880 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 00:17:17.161884 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 00:17:17.161887 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 00:17:17.161892 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 00:17:17.161896 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 00:17:17.161901 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 
00:17:17.161904 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-04-04 00:17:17.161908 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-04-04 00:17:17.161912 | orchestrator | ++ export ARA=false
2026-04-04 00:17:17.161916 | orchestrator | ++ ARA=false
2026-04-04 00:17:17.161920 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-04 00:17:17.161924 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-04 00:17:17.161928 | orchestrator | ++ export TEMPEST=true
2026-04-04 00:17:17.161931 | orchestrator | ++ TEMPEST=true
2026-04-04 00:17:17.161935 | orchestrator | ++ export IS_ZUUL=true
2026-04-04 00:17:17.161939 | orchestrator | ++ IS_ZUUL=true
2026-04-04 00:17:17.161942 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-04 00:17:17.161946 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-04 00:17:17.161950 | orchestrator | ++ export EXTERNAL_API=false
2026-04-04 00:17:17.161954 | orchestrator | ++ EXTERNAL_API=false
2026-04-04 00:17:17.161958 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-04 00:17:17.161961 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-04 00:17:17.161965 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-04 00:17:17.161969 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-04 00:17:17.161973 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-04 00:17:17.161977 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-04 00:17:17.161986 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-04 00:17:17.161990 | orchestrator | ++ export INTERACTIVE=false
2026-04-04 00:17:17.161994 | orchestrator | ++ INTERACTIVE=false
2026-04-04 00:17:17.161997 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-04 00:17:17.162004 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-04 00:17:17.162008 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-04 00:17:17.162012 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-04 00:17:17.162042 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-04-04 00:17:17.169017 | orchestrator | + set -e
2026-04-04 00:17:17.169067 | orchestrator | + VERSION=reef
2026-04-04 00:17:17.169828 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-04-04 00:17:17.175792 | orchestrator | + [[ -n ceph_version: reef ]]
2026-04-04 00:17:17.175808 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-04-04 00:17:17.181025 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1
2026-04-04 00:17:17.187136 | orchestrator | + set -e
2026-04-04 00:17:17.187160 | orchestrator | + VERSION=2025.1
2026-04-04 00:17:17.188075 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-04-04 00:17:17.191622 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-04-04 00:17:17.191643 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml
2026-04-04 00:17:17.196069 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-04 00:17:17.196873 | orchestrator | ++ semver latest 7.0.0
2026-04-04 00:17:17.251520 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-04 00:17:17.251620 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-04 00:17:17.251635 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-04 00:17:17.252317 | orchestrator | ++ semver latest 10.0.0-0
2026-04-04 00:17:17.309224 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-04 00:17:17.309362 | orchestrator | ++ semver 2025.1 2025.1
2026-04-04 00:17:17.389622 | orchestrator | + [[ 0 -ge 0 ]]
2026-04-04 00:17:17.389721 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-04 00:17:17.395934 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-04 00:17:17.400461 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-04 00:17:17.493394 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-04 00:17:17.494210 | orchestrator | + source /opt/venv/bin/activate
2026-04-04 00:17:17.496532 | orchestrator | ++ deactivate nondestructive
2026-04-04 00:17:17.496558 | orchestrator | ++ '[' -n '' ']'
2026-04-04 00:17:17.496569 | orchestrator | ++ '[' -n '' ']'
2026-04-04 00:17:17.496580 | orchestrator | ++ hash -r
2026-04-04 00:17:17.496591 | orchestrator | ++ '[' -n '' ']'
2026-04-04 00:17:17.496602 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-04 00:17:17.496613 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-04 00:17:17.496624 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-04 00:17:17.496635 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-04 00:17:17.496646 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-04 00:17:17.496657 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-04 00:17:17.496668 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-04 00:17:17.496680 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-04 00:17:17.496712 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-04 00:17:17.496724 | orchestrator | ++ export PATH
2026-04-04 00:17:17.496735 | orchestrator | ++ '[' -n '' ']'
2026-04-04 00:17:17.496746 | orchestrator | ++ '[' -z '' ']'
2026-04-04 00:17:17.496757 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-04 00:17:17.496768 | orchestrator | ++ PS1='(venv) '
2026-04-04 00:17:17.496780 | orchestrator | ++ export PS1
2026-04-04 00:17:17.496792 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-04 00:17:17.496803 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-04 00:17:17.496813 | orchestrator | ++ hash -r
2026-04-04 00:17:17.496825 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-04 00:17:18.628815 | orchestrator |
2026-04-04 00:17:18.628923 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-04 00:17:18.628939 | orchestrator |
2026-04-04 00:17:18.628952 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-04 00:17:19.185131 | orchestrator | ok: [testbed-manager]
2026-04-04 00:17:19.185212 | orchestrator |
2026-04-04 00:17:19.185220 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-04 00:17:20.176699 | orchestrator | changed: [testbed-manager]
2026-04-04 00:17:20.176776 | orchestrator |
2026-04-04 00:17:20.176786 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-04 00:17:20.176793 | orchestrator |
2026-04-04 00:17:20.176798 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-04 00:17:23.545184 | orchestrator | ok: [testbed-manager]
2026-04-04 00:17:23.545329 | orchestrator |
2026-04-04 00:17:23.545349 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-04 00:17:23.598488 | orchestrator | ok: [testbed-manager]
2026-04-04 00:17:23.598581 | orchestrator |
2026-04-04 00:17:23.598596 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-04 00:17:24.038001 | orchestrator | changed: [testbed-manager]
2026-04-04 00:17:24.038152 | orchestrator |
2026-04-04 00:17:24.038169 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-04 00:17:24.079515 | orchestrator | skipping:
[testbed-manager] 2026-04-04 00:17:24.079600 | orchestrator | 2026-04-04 00:17:24.079614 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-04 00:17:24.427737 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:24.427867 | orchestrator | 2026-04-04 00:17:24.427884 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-04 00:17:24.763388 | orchestrator | ok: [testbed-manager] 2026-04-04 00:17:24.763494 | orchestrator | 2026-04-04 00:17:24.763512 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-04 00:17:24.878801 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:17:24.878915 | orchestrator | 2026-04-04 00:17:24.878940 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-04 00:17:24.878953 | orchestrator | 2026-04-04 00:17:24.878964 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:17:26.610778 | orchestrator | ok: [testbed-manager] 2026-04-04 00:17:26.610883 | orchestrator | 2026-04-04 00:17:26.610901 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-04 00:17:26.713629 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-04 00:17:26.713724 | orchestrator | 2026-04-04 00:17:26.713738 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-04 00:17:26.780912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-04 00:17:26.781003 | orchestrator | 2026-04-04 00:17:26.781018 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-04 00:17:27.878679 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik) 2026-04-04 00:17:27.878780 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-04 00:17:27.878799 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-04 00:17:27.878814 | orchestrator | 2026-04-04 00:17:27.878829 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-04 00:17:29.589941 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-04 00:17:29.590154 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-04 00:17:29.590184 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-04 00:17:29.590206 | orchestrator | 2026-04-04 00:17:29.590228 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-04 00:17:30.145980 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:17:30.146118 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:30.146134 | orchestrator | 2026-04-04 00:17:30.146146 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-04 00:17:30.683369 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:17:30.683469 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:30.683485 | orchestrator | 2026-04-04 00:17:30.683498 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-04 00:17:30.731461 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:17:30.731543 | orchestrator | 2026-04-04 00:17:30.731554 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-04 00:17:31.061061 | orchestrator | ok: [testbed-manager] 2026-04-04 00:17:31.061159 | orchestrator | 2026-04-04 00:17:31.061174 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-04 
00:17:31.120775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-04 00:17:31.120868 | orchestrator | 2026-04-04 00:17:31.120906 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-04 00:17:32.141632 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:32.141697 | orchestrator | 2026-04-04 00:17:32.141703 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-04 00:17:32.924993 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:32.925107 | orchestrator | 2026-04-04 00:17:32.925124 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-04 00:17:50.707124 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:50.707286 | orchestrator | 2026-04-04 00:17:50.707308 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-04 00:17:50.769036 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:17:50.769129 | orchestrator | 2026-04-04 00:17:50.769144 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-04 00:17:50.769186 | orchestrator | 2026-04-04 00:17:50.769198 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:17:52.616553 | orchestrator | ok: [testbed-manager] 2026-04-04 00:17:52.616655 | orchestrator | 2026-04-04 00:17:52.616673 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-04 00:17:52.718700 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-04 00:17:52.718797 | orchestrator | 2026-04-04 00:17:52.718812 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-04 00:17:52.766216 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:17:52.766355 | orchestrator | 2026-04-04 00:17:52.766373 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-04 00:17:54.752409 | orchestrator | ok: [testbed-manager] 2026-04-04 00:17:54.752537 | orchestrator | 2026-04-04 00:17:54.752570 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-04 00:17:54.801350 | orchestrator | ok: [testbed-manager] 2026-04-04 00:17:54.801420 | orchestrator | 2026-04-04 00:17:54.801426 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-04 00:17:54.912805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-04 00:17:54.912891 | orchestrator | 2026-04-04 00:17:54.912903 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-04 00:17:57.437806 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-04 00:17:57.437917 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-04 00:17:57.437933 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-04 00:17:57.437946 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-04 00:17:57.437957 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-04 00:17:57.437969 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-04 00:17:57.437980 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-04 00:17:57.437991 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-04 00:17:57.438002 | orchestrator | 2026-04-04 00:17:57.438072 | 
orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-04 00:17:58.010667 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:58.010768 | orchestrator | 2026-04-04 00:17:58.010784 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-04 00:17:58.608709 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:58.608811 | orchestrator | 2026-04-04 00:17:58.608827 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-04 00:17:58.670661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-04 00:17:58.670775 | orchestrator | 2026-04-04 00:17:58.670799 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-04 00:17:59.726079 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-04 00:17:59.726189 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-04 00:17:59.726208 | orchestrator | 2026-04-04 00:17:59.726221 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-04 00:18:00.240640 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:00.240757 | orchestrator | 2026-04-04 00:18:00.240782 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-04 00:18:00.284668 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:18:00.284741 | orchestrator | 2026-04-04 00:18:00.284750 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-04 00:18:00.352556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-04 00:18:00.352646 | orchestrator | 2026-04-04 00:18:00.352660 | 
orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-04 00:18:00.884111 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:00.884198 | orchestrator | 2026-04-04 00:18:00.884208 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-04 00:18:00.937609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-04 00:18:00.937720 | orchestrator | 2026-04-04 00:18:00.937745 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-04 00:18:02.037804 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:18:02.037901 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:18:02.037915 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:02.037927 | orchestrator | 2026-04-04 00:18:02.037938 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-04 00:18:02.595904 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:02.596002 | orchestrator | 2026-04-04 00:18:02.596019 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-04 00:18:02.651980 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:18:02.652083 | orchestrator | 2026-04-04 00:18:02.652101 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-04 00:18:02.726890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-04 00:18:02.726988 | orchestrator | 2026-04-04 00:18:02.727003 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-04 00:18:03.200199 | orchestrator | changed: [testbed-manager] 
2026-04-04 00:18:03.200350 | orchestrator | 2026-04-04 00:18:03.200363 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-04 00:18:03.561880 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:03.561979 | orchestrator | 2026-04-04 00:18:03.561995 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-04 00:18:04.726108 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-04 00:18:04.727020 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-04 00:18:04.727056 | orchestrator | 2026-04-04 00:18:04.727070 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-04 00:18:05.381884 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:05.381988 | orchestrator | 2026-04-04 00:18:05.382005 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-04 00:18:05.742482 | orchestrator | ok: [testbed-manager] 2026-04-04 00:18:05.742571 | orchestrator | 2026-04-04 00:18:05.742584 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-04 00:18:06.083877 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:06.083980 | orchestrator | 2026-04-04 00:18:06.083996 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-04 00:18:06.120211 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:18:06.120343 | orchestrator | 2026-04-04 00:18:06.120359 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-04 00:18:06.186628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-04 00:18:06.186748 | orchestrator | 2026-04-04 00:18:06.186775 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-04-04 00:18:06.226544 | orchestrator | ok: [testbed-manager] 2026-04-04 00:18:06.226614 | orchestrator | 2026-04-04 00:18:06.226628 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-04 00:18:08.232348 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-04 00:18:08.232461 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-04 00:18:08.232478 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-04 00:18:08.232490 | orchestrator | 2026-04-04 00:18:08.232503 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-04 00:18:08.948513 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:08.948617 | orchestrator | 2026-04-04 00:18:08.948637 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-04 00:18:09.626297 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:09.626428 | orchestrator | 2026-04-04 00:18:09.626445 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-04 00:18:10.311484 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:10.311582 | orchestrator | 2026-04-04 00:18:10.311598 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-04 00:18:10.374716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-04 00:18:10.374807 | orchestrator | 2026-04-04 00:18:10.374822 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-04 00:18:10.419419 | orchestrator | ok: [testbed-manager] 2026-04-04 00:18:10.419507 | orchestrator | 2026-04-04 00:18:10.419519 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-04-04 00:18:11.095599 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-04 00:18:11.095700 | orchestrator | 2026-04-04 00:18:11.095717 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-04 00:18:11.179548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-04 00:18:11.179643 | orchestrator | 2026-04-04 00:18:11.179658 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-04 00:18:11.853587 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:11.853690 | orchestrator | 2026-04-04 00:18:11.853706 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-04 00:18:12.463004 | orchestrator | ok: [testbed-manager] 2026-04-04 00:18:12.463102 | orchestrator | 2026-04-04 00:18:12.463124 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-04 00:18:12.518941 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:18:12.519040 | orchestrator | 2026-04-04 00:18:12.519054 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-04 00:18:12.565458 | orchestrator | ok: [testbed-manager] 2026-04-04 00:18:12.565537 | orchestrator | 2026-04-04 00:18:12.565548 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-04 00:18:13.364303 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:13.364383 | orchestrator | 2026-04-04 00:18:13.364398 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-04 00:19:19.551807 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:19.551892 | orchestrator | 2026-04-04 
00:19:19.551902 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-04 00:19:20.476227 | orchestrator | ok: [testbed-manager] 2026-04-04 00:19:20.476342 | orchestrator | 2026-04-04 00:19:20.476354 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-04 00:19:20.517541 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:20.517634 | orchestrator | 2026-04-04 00:19:20.517672 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-04 00:19:22.748726 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:22.748874 | orchestrator | 2026-04-04 00:19:22.748893 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-04 00:19:22.844564 | orchestrator | ok: [testbed-manager] 2026-04-04 00:19:22.844641 | orchestrator | 2026-04-04 00:19:22.844652 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-04 00:19:22.844661 | orchestrator | 2026-04-04 00:19:22.844669 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-04 00:19:22.903801 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:22.903895 | orchestrator | 2026-04-04 00:19:22.903909 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-04 00:20:22.951606 | orchestrator | Pausing for 60 seconds 2026-04-04 00:20:22.951722 | orchestrator | changed: [testbed-manager] 2026-04-04 00:20:22.951738 | orchestrator | 2026-04-04 00:20:22.951753 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-04 00:20:25.972217 | orchestrator | changed: [testbed-manager] 2026-04-04 00:20:25.972302 | orchestrator | 2026-04-04 00:20:25.972313 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-04-04 00:21:07.320630 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-04 00:21:07.320740 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-04-04 00:21:07.320756 | orchestrator | changed: [testbed-manager] 2026-04-04 00:21:07.320770 | orchestrator | 2026-04-04 00:21:07.320782 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-04 00:21:12.795404 | orchestrator | changed: [testbed-manager] 2026-04-04 00:21:12.795557 | orchestrator | 2026-04-04 00:21:12.795574 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-04 00:21:12.882094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-04 00:21:12.882170 | orchestrator | 2026-04-04 00:21:12.882180 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-04 00:21:12.882188 | orchestrator | 2026-04-04 00:21:12.882195 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-04 00:21:12.934357 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:21:12.934506 | orchestrator | 2026-04-04 00:21:12.934522 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-04 00:21:13.007082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-04 00:21:13.007178 | orchestrator | 2026-04-04 00:21:13.007193 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-04 00:21:13.764435 | orchestrator | changed: [testbed-manager] 2026-04-04 00:21:13.764563 | 
orchestrator | 2026-04-04 00:21:13.764577 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-04 00:21:16.860078 | orchestrator | ok: [testbed-manager] 2026-04-04 00:21:16.860177 | orchestrator | 2026-04-04 00:21:16.860193 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-04 00:21:16.930234 | orchestrator | ok: [testbed-manager] => { 2026-04-04 00:21:16.930328 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-04 00:21:16.930342 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-04 00:21:16.930354 | orchestrator | "Checking running containers against expected versions...", 2026-04-04 00:21:16.930366 | orchestrator | "", 2026-04-04 00:21:16.930378 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-04 00:21:16.930389 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-04 00:21:16.930400 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.930411 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-04 00:21:16.930422 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.930433 | orchestrator | "", 2026-04-04 00:21:16.930444 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-04 00:21:16.930569 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-04 00:21:16.930583 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.930594 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-04 00:21:16.930605 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.930616 | orchestrator | "", 2026-04-04 00:21:16.930627 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-04 00:21:16.930638 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-04 
00:21:16.930649 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.930660 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-04 00:21:16.930671 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.930683 | orchestrator | "", 2026-04-04 00:21:16.930694 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-04 00:21:16.930706 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-04 00:21:16.930717 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.930728 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-04 00:21:16.930739 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.930775 | orchestrator | "", 2026-04-04 00:21:16.930788 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-04 00:21:16.930801 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-04-04 00:21:16.930815 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.930827 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-04-04 00:21:16.930840 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.930852 | orchestrator | "", 2026-04-04 00:21:16.930865 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-04 00:21:16.930877 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.930890 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.930902 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.930915 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.930927 | orchestrator | "", 2026-04-04 00:21:16.930939 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-04 00:21:16.930952 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-04 00:21:16.930964 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.930978 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-04-04 00:21:16.930990 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931003 | orchestrator | "", 2026-04-04 00:21:16.931015 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-04 00:21:16.931037 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-04 00:21:16.931049 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931062 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-04 00:21:16.931075 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931087 | orchestrator | "", 2026-04-04 00:21:16.931105 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-04 00:21:16.931119 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-04 00:21:16.931132 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931143 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-04 00:21:16.931154 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931165 | orchestrator | "", 2026-04-04 00:21:16.931176 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-04 00:21:16.931187 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-04 00:21:16.931198 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931208 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-04 00:21:16.931219 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931230 | orchestrator | "", 2026-04-04 00:21:16.931242 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-04 00:21:16.931261 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931285 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931313 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931333 | orchestrator | " 
Status: ✅ MATCH", 2026-04-04 00:21:16.931352 | orchestrator | "", 2026-04-04 00:21:16.931377 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-04 00:21:16.931399 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931418 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931436 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931489 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931508 | orchestrator | "", 2026-04-04 00:21:16.931525 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-04 00:21:16.931543 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931561 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931580 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931598 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931616 | orchestrator | "", 2026-04-04 00:21:16.931635 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-04 00:21:16.931669 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931688 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931705 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931723 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931741 | orchestrator | "", 2026-04-04 00:21:16.931752 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-04 00:21:16.931783 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931795 | orchestrator | " Enabled: true", 2026-04-04 00:21:16.931805 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:21:16.931816 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:21:16.931827 | orchestrator | "", 2026-04-04 00:21:16.931837 | orchestrator | "=== Summary ===", 2026-04-04 
00:21:16.931848 | orchestrator | "Errors (version mismatches): 0", 2026-04-04 00:21:16.931859 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-04 00:21:16.931870 | orchestrator | "", 2026-04-04 00:21:16.931880 | orchestrator | "✅ All running containers match expected versions!" 2026-04-04 00:21:16.931891 | orchestrator | ] 2026-04-04 00:21:16.931902 | orchestrator | } 2026-04-04 00:21:16.931913 | orchestrator | 2026-04-04 00:21:16.931924 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-04 00:21:16.975618 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:21:16.975708 | orchestrator | 2026-04-04 00:21:16.975722 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:21:16.975738 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-04 00:21:16.975750 | orchestrator | 2026-04-04 00:21:17.061870 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-04 00:21:17.061954 | orchestrator | + deactivate 2026-04-04 00:21:17.061968 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-04 00:21:17.061980 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-04 00:21:17.061990 | orchestrator | + export PATH 2026-04-04 00:21:17.062000 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-04 00:21:17.062010 | orchestrator | + '[' -n '' ']' 2026-04-04 00:21:17.062073 | orchestrator | + hash -r 2026-04-04 00:21:17.062084 | orchestrator | + '[' -n '' ']' 2026-04-04 00:21:17.062094 | orchestrator | + unset VIRTUAL_ENV 2026-04-04 00:21:17.062103 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-04 00:21:17.062113 | orchestrator | + '[' '!' 
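The version check above walks each manager service, compares the expected image tag against the running container, and counts mismatches as errors and missing-but-expected containers as warnings. A minimal sketch of that comparison logic (hypothetical; not the actual `osism.services.manager` task implementation):

```python
# Hypothetical sketch of the expected-vs-running image check logged above.
# Each service dict mirrors the fields printed per service: Expected,
# Enabled, Running (None when the expected container is not running).
def check_services(services):
    """Return (errors, warnings) like the '=== Summary ===' block."""
    errors = warnings = 0
    for svc in services:
        if not svc["enabled"]:
            continue                       # disabled services are skipped
        if svc["running"] is None:
            warnings += 1                  # expected container not running
        elif svc["running"] != svc["expected"]:
            errors += 1                    # version mismatch
    return errors, warnings

services = [
    {"name": "mariadb", "enabled": True,
     "expected": "registry.osism.tech/dockerhub/library/mariadb:11.8.4",
     "running":  "registry.osism.tech/dockerhub/library/mariadb:11.8.4"},
    {"name": "redis", "enabled": True,
     "expected": "registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
     "running":  "registry.osism.tech/dockerhub/library/redis:7.4.7-alpine"},
]
print(check_services(services))  # (0, 0) -> all containers match
```

With every service reporting `✅ MATCH`, both counters stay at zero, which is exactly the summary the task prints before the play recap.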
'' = nondestructive ']' 2026-04-04 00:21:17.062123 | orchestrator | + unset -f deactivate 2026-04-04 00:21:17.062134 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-04 00:21:17.069428 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-04 00:21:17.069484 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-04 00:21:17.069495 | orchestrator | + local max_attempts=60 2026-04-04 00:21:17.069505 | orchestrator | + local name=ceph-ansible 2026-04-04 00:21:17.069515 | orchestrator | + local attempt_num=1 2026-04-04 00:21:17.070241 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:21:17.105040 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:21:17.105107 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-04 00:21:17.105120 | orchestrator | + local max_attempts=60 2026-04-04 00:21:17.105131 | orchestrator | + local name=kolla-ansible 2026-04-04 00:21:17.105142 | orchestrator | + local attempt_num=1 2026-04-04 00:21:17.105385 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-04 00:21:17.138137 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:21:17.138220 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-04 00:21:17.138233 | orchestrator | + local max_attempts=60 2026-04-04 00:21:17.138245 | orchestrator | + local name=osism-ansible 2026-04-04 00:21:17.138257 | orchestrator | + local attempt_num=1 2026-04-04 00:21:17.138722 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-04 00:21:17.165692 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:21:17.165796 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-04 00:21:17.165817 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-04 00:21:17.805978 | orchestrator | + docker compose 
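The bash trace above shows `wait_for_container_healthy 60 <name>` polling `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`, up to a maximum number of attempts. The same retry loop can be sketched in Python, with a `status()` callable standing in for the `docker inspect` call (an assumption; the real script shells out to Docker):

```python
# Sketch of the wait_for_container_healthy loop traced above.
# status() stands in for: docker inspect -f '{{.State.Health.Status}}' NAME
import time

def wait_for_container_healthy(status, max_attempts=60, delay=0.0):
    """Poll status() until it returns 'healthy'; return the attempt count."""
    for attempt in range(1, max_attempts + 1):
        if status() == "healthy":
            return attempt
        time.sleep(delay)                  # the shell script sleeps between tries
    raise TimeoutError("container did not become healthy")

# Simulated container that becomes healthy on the third poll.
states = iter(["starting", "starting", "healthy"])
print(wait_for_container_healthy(lambda: next(states)))  # 3
```

In the log all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy, so each wait returned on the first attempt.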
--project-directory /opt/manager ps 2026-04-04 00:21:17.977330 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-04 00:21:17.977414 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.977425 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.977432 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-04 00:21:17.977440 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-04 00:21:17.977474 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.977481 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.977505 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-04-04 00:21:17.977512 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.977518 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-04 00:21:17.977524 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-04-04 00:21:17.977531 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-04 00:21:17.977537 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.977544 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-04 00:21:17.977550 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.977556 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-04 00:21:17.983069 | orchestrator | ++ semver latest 7.0.0 2026-04-04 00:21:18.033814 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:21:18.033938 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 00:21:18.033969 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-04 00:21:18.037734 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-04 00:21:30.316658 | orchestrator | 2026-04-04 00:21:30 | INFO  | Prepare task for execution of resolvconf. 2026-04-04 00:21:30.532533 | orchestrator | 2026-04-04 00:21:30 | INFO  | Task b9a00e88-8293-4a32-85b6-2ea9a0cad55d (resolvconf) was prepared for execution. 2026-04-04 00:21:30.532621 | orchestrator | 2026-04-04 00:21:30 | INFO  | It takes a moment until task b9a00e88-8293-4a32-85b6-2ea9a0cad55d (resolvconf) has been started and output is visible here. 
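The trace above runs `semver latest 7.0.0`, gets `-1` (so the `-ge 0` test fails), and then falls through to a literal `latest == latest` check before patching `ansible.cfg`. In other words, the gate is "version is at least 7.0.0, or the floating `latest` tag". A sketch of that two-branch gate (hypothetical helper names; the real job uses a `semver` CLI):

```python
# Sketch of the version gate traced above: numeric semver comparison,
# with the non-numeric "latest" tag treated as newest.
def semver_cmp(a, b):
    """Compare two dotted numeric versions: -1, 0, or 1."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return (pa > pb) - (pa < pb)

def gate_passes(version, minimum="7.0.0"):
    if version == "latest":            # [[ latest == latest ]] branch
        return True
    return semver_cmp(version, minimum) >= 0   # [[ $(semver ...) -ge 0 ]] branch

print(gate_passes("latest"))   # True
print(gate_passes("6.1.0"))    # False
```

This matches the trace: the numeric comparison on `latest` cannot succeed, so the explicit string match is what lets the `sed` step run.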
2026-04-04 00:21:43.494980 | orchestrator | 2026-04-04 00:21:43.495095 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-04 00:21:43.495114 | orchestrator | 2026-04-04 00:21:43.495126 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:21:43.495138 | orchestrator | Saturday 04 April 2026 00:21:33 +0000 (0:00:00.171) 0:00:00.171 ******** 2026-04-04 00:21:43.495150 | orchestrator | ok: [testbed-manager] 2026-04-04 00:21:43.495162 | orchestrator | 2026-04-04 00:21:43.495174 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-04 00:21:43.495186 | orchestrator | Saturday 04 April 2026 00:21:38 +0000 (0:00:04.615) 0:00:04.786 ******** 2026-04-04 00:21:43.495197 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:21:43.495209 | orchestrator | 2026-04-04 00:21:43.495220 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-04 00:21:43.495231 | orchestrator | Saturday 04 April 2026 00:21:38 +0000 (0:00:00.042) 0:00:04.829 ******** 2026-04-04 00:21:43.495242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-04 00:21:43.495255 | orchestrator | 2026-04-04 00:21:43.495266 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-04 00:21:43.495288 | orchestrator | Saturday 04 April 2026 00:21:38 +0000 (0:00:00.080) 0:00:04.910 ******** 2026-04-04 00:21:43.495300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:21:43.495311 | orchestrator | 2026-04-04 00:21:43.495322 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-04 00:21:43.495334 | orchestrator | Saturday 04 April 2026 00:21:38 +0000 (0:00:00.066) 0:00:04.976 ******** 2026-04-04 00:21:43.495345 | orchestrator | ok: [testbed-manager] 2026-04-04 00:21:43.495357 | orchestrator | 2026-04-04 00:21:43.495368 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-04 00:21:43.495379 | orchestrator | Saturday 04 April 2026 00:21:39 +0000 (0:00:00.923) 0:00:05.900 ******** 2026-04-04 00:21:43.495390 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:21:43.495401 | orchestrator | 2026-04-04 00:21:43.495412 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-04 00:21:43.495423 | orchestrator | Saturday 04 April 2026 00:21:39 +0000 (0:00:00.058) 0:00:05.959 ******** 2026-04-04 00:21:43.495434 | orchestrator | ok: [testbed-manager] 2026-04-04 00:21:43.495445 | orchestrator | 2026-04-04 00:21:43.495506 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-04 00:21:43.495530 | orchestrator | Saturday 04 April 2026 00:21:39 +0000 (0:00:00.483) 0:00:06.442 ******** 2026-04-04 00:21:43.495549 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:21:43.495565 | orchestrator | 2026-04-04 00:21:43.495579 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-04 00:21:43.495593 | orchestrator | Saturday 04 April 2026 00:21:39 +0000 (0:00:00.067) 0:00:06.509 ******** 2026-04-04 00:21:43.495605 | orchestrator | changed: [testbed-manager] 2026-04-04 00:21:43.495618 | orchestrator | 2026-04-04 00:21:43.495631 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-04 00:21:43.495643 | orchestrator | Saturday 04 April 2026 00:21:40 +0000 (0:00:00.489) 0:00:06.999 ******** 2026-04-04 00:21:43.495656 | orchestrator | changed: 
[testbed-manager] 2026-04-04 00:21:43.495668 | orchestrator | 2026-04-04 00:21:43.495681 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-04 00:21:43.495719 | orchestrator | Saturday 04 April 2026 00:21:41 +0000 (0:00:00.895) 0:00:07.895 ******** 2026-04-04 00:21:43.495732 | orchestrator | ok: [testbed-manager] 2026-04-04 00:21:43.495745 | orchestrator | 2026-04-04 00:21:43.495758 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-04 00:21:43.495770 | orchestrator | Saturday 04 April 2026 00:21:42 +0000 (0:00:00.865) 0:00:08.761 ******** 2026-04-04 00:21:43.495783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-04 00:21:43.495796 | orchestrator | 2026-04-04 00:21:43.495808 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-04 00:21:43.495821 | orchestrator | Saturday 04 April 2026 00:21:42 +0000 (0:00:00.081) 0:00:08.843 ******** 2026-04-04 00:21:43.495833 | orchestrator | changed: [testbed-manager] 2026-04-04 00:21:43.495846 | orchestrator | 2026-04-04 00:21:43.495859 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:21:43.495871 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 00:21:43.495882 | orchestrator | 2026-04-04 00:21:43.495893 | orchestrator | 2026-04-04 00:21:43.495904 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:21:43.495914 | orchestrator | Saturday 04 April 2026 00:21:43 +0000 (0:00:01.133) 0:00:09.976 ******** 2026-04-04 00:21:43.495925 | orchestrator | =============================================================================== 2026-04-04 00:21:43.495936 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.62s 2026-04-04 00:21:43.495947 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2026-04-04 00:21:43.495958 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.92s 2026-04-04 00:21:43.495969 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.90s 2026-04-04 00:21:43.495979 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.87s 2026-04-04 00:21:43.495990 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.49s 2026-04-04 00:21:43.496020 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2026-04-04 00:21:43.496032 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-04 00:21:43.496043 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-04 00:21:43.496054 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-04-04 00:21:43.496065 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-04-04 00:21:43.496082 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-04 00:21:43.496094 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.04s 2026-04-04 00:21:43.675680 | orchestrator | + osism apply sshconfig 2026-04-04 00:21:55.036611 | orchestrator | 2026-04-04 00:21:55 | INFO  | Prepare task for execution of sshconfig. 2026-04-04 00:21:55.114513 | orchestrator | 2026-04-04 00:21:55 | INFO  | Task d941a3ce-82ae-4a32-aaf9-8e0dab76f60d (sshconfig) was prepared for execution. 
2026-04-04 00:21:55.114589 | orchestrator | 2026-04-04 00:21:55 | INFO  | It takes a moment until task d941a3ce-82ae-4a32-aaf9-8e0dab76f60d (sshconfig) has been started and output is visible here. 2026-04-04 00:22:05.829922 | orchestrator | 2026-04-04 00:22:05.830095 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-04 00:22:05.830117 | orchestrator | 2026-04-04 00:22:05.830129 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-04 00:22:05.830141 | orchestrator | Saturday 04 April 2026 00:21:58 +0000 (0:00:00.193) 0:00:00.193 ******** 2026-04-04 00:22:05.830182 | orchestrator | ok: [testbed-manager] 2026-04-04 00:22:05.830195 | orchestrator | 2026-04-04 00:22:05.830206 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-04 00:22:05.830218 | orchestrator | Saturday 04 April 2026 00:21:59 +0000 (0:00:00.887) 0:00:01.080 ******** 2026-04-04 00:22:05.830229 | orchestrator | changed: [testbed-manager] 2026-04-04 00:22:05.830240 | orchestrator | 2026-04-04 00:22:05.830252 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-04 00:22:05.830263 | orchestrator | Saturday 04 April 2026 00:21:59 +0000 (0:00:00.527) 0:00:01.607 ******** 2026-04-04 00:22:05.830274 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-04 00:22:05.830285 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-04 00:22:05.830296 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-04 00:22:05.830307 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-04 00:22:05.830318 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-04 00:22:05.830329 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-04 00:22:05.830341 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-04 00:22:05.830353 | orchestrator | 2026-04-04 00:22:05.830365 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-04 00:22:05.830376 | orchestrator | Saturday 04 April 2026 00:22:05 +0000 (0:00:05.332) 0:00:06.940 ******** 2026-04-04 00:22:05.830387 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:22:05.830399 | orchestrator | 2026-04-04 00:22:05.830411 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-04 00:22:05.830423 | orchestrator | Saturday 04 April 2026 00:22:05 +0000 (0:00:00.105) 0:00:07.046 ******** 2026-04-04 00:22:05.830433 | orchestrator | changed: [testbed-manager] 2026-04-04 00:22:05.830444 | orchestrator | 2026-04-04 00:22:05.830455 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:22:05.830467 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:22:05.830503 | orchestrator | 2026-04-04 00:22:05.830515 | orchestrator | 2026-04-04 00:22:05.830527 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:22:05.830539 | orchestrator | Saturday 04 April 2026 00:22:05 +0000 (0:00:00.536) 0:00:07.583 ******** 2026-04-04 00:22:05.830550 | orchestrator | =============================================================================== 2026-04-04 00:22:05.830562 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.33s 2026-04-04 00:22:05.830575 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.89s 2026-04-04 00:22:05.830588 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2026-04-04 00:22:05.830601 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2026-04-04 00:22:05.830613 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-04-04 00:22:06.005080 | orchestrator | + osism apply known-hosts 2026-04-04 00:22:17.341553 | orchestrator | 2026-04-04 00:22:17 | INFO  | Prepare task for execution of known-hosts. 2026-04-04 00:22:17.422289 | orchestrator | 2026-04-04 00:22:17 | INFO  | Task 5e51b9bf-f802-4db7-962a-fcc468f588c2 (known-hosts) was prepared for execution. 2026-04-04 00:22:17.422375 | orchestrator | 2026-04-04 00:22:17 | INFO  | It takes a moment until task 5e51b9bf-f802-4db7-962a-fcc468f588c2 (known-hosts) has been started and output is visible here. 2026-04-04 00:22:32.488687 | orchestrator | 2026-04-04 00:22:32.488828 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-04 00:22:32.488860 | orchestrator | 2026-04-04 00:22:32.488882 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-04 00:22:32.488904 | orchestrator | Saturday 04 April 2026 00:22:20 +0000 (0:00:00.217) 0:00:00.217 ******** 2026-04-04 00:22:32.488955 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-04 00:22:32.488976 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-04 00:22:32.488994 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-04 00:22:32.489012 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-04 00:22:32.489030 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-04 00:22:32.489049 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-04 00:22:32.489080 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-04 00:22:32.489099 | orchestrator | 2026-04-04 00:22:32.489118 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-04 
00:22:32.489139 | orchestrator | Saturday 04 April 2026 00:22:26 +0000 (0:00:06.297) 0:00:06.515 ******** 2026-04-04 00:22:32.489159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-04 00:22:32.489182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-04 00:22:32.489201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-04 00:22:32.489223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-04 00:22:32.489242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-04 00:22:32.489258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-04 00:22:32.489271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-04 00:22:32.489283 | orchestrator | 2026-04-04 00:22:32.489296 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:32.489308 | orchestrator | Saturday 04 April 2026 00:22:27 +0000 (0:00:00.164) 0:00:06.679 ******** 2026-04-04 00:22:32.489326 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChnAziZjqWyMSBm2NJSyfit3ybSESeKZMtNZwwzaVD/SzoI4NLputbFWWalaSIklgV6Ntte21p41uNASFIfC2W2YYGwwFhoMyq8nrblRuFHkQz1HjZFA5OGxK4/SpTYYhOR0SNVI22ps0+QRxWNYn5XUn84I5jdnQ1srcufBWDfRK73+Bs+PvlgFSJLCPddz9oYndeh9izXGivo9cTkBp+38WeZq/EUNBhbdmo089NJeGlC6i8bxUlU7g6qYluZeypwnc7gJy0WpJa5EZu1xVXnp0XL7yYmqnOdPmYFnu/jvNOJguNn55htaJ0wciROjzTT7JgSosk5+iNkdgdpjLt5qan25YNVwdlUOAnIPBw/QFELEVduqmj9W1QX+7RpxCFtPoElHMQaUExGi1F9nNmr2Zrq1PV4OKsrneEFYTVBmSU9KG2RoQC17eKskWQALOeE7r2ZL4A/aWR4UTsknnJtzE2Oe+gBk+b9VS8fEMJACzUvb0HoawQkHz1Pe4rArs=) 2026-04-04 00:22:32.489343 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMqbUJQF5mDpnWWgv0I53LKSrRt9XMrxZOnmFeOHN0HVbqAjibNnTG8wVpDnZA23hTQ7DfjVjKAzprcy2TbQLCs=) 2026-04-04 00:22:32.489358 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGBQP5Ft8dsP0sFg4GI+oyLWmyKE4NlDXZwII29KkJqS) 2026-04-04 00:22:32.489372 | orchestrator | 2026-04-04 00:22:32.489384 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:32.489396 | orchestrator | Saturday 04 April 2026 00:22:28 +0000 (0:00:01.201) 0:00:07.881 ******** 2026-04-04 00:22:32.489407 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIxdPjQIgtn6BsOnni3HzmHkCEvmB/aqF977pqpxC2K+) 2026-04-04 00:22:32.489466 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLMguYlTfW74zmfx59ICl49/nFfbC3ajO8h1kFRW8fFVfNZEmTjCggo871G6n9akAOJqTQwS2u1l1AFTCbUFKOnH7XLtjL2yFnkuj3FgcOGZ2BjT/3LD/fo0guRU36/QfWNpPebRKatMNq4N2K/kD12kiJKlSZbPkegHLTXyHq53PGSblirypJ7y3Wcquxv/XuTmqtUHF4qPhTA2usefHHsE6K8FWtkhH/5EC3k4r0Iek1Z5S6fTR4+QAHjK8LErME8Nlb/UT8KO1oSu4bscgOuyRmckM0HsivN9t8ED4TRpFdyJH4iViR+Z0YE0YwSzOBhEsMWgLetGRGUQTbxkTGruOnXLyvc+yDEqIG1J02Zhwr2PhIsXrEa8P/7GiXeaeISsyHIg2C8Z6KYol94MPI8e0rfZK/efdCFo1JgCiX9Jythrrh0P38tk0LJh/r+Ss3Du9MK6CO1+JzQ1dupbBH83KeO2yV6BTx7+gnTOf4Dxjz+Mln9c8MvgCygYY+glU=) 2026-04-04 00:22:32.489480 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMEKsYEVj8kkW+VgIWvWBbCErbtYGGVU8nzN3joKQhQktlBjqKc4ijdcU6VV8LcS4MCt+7vOlmN9RtmQ6/3mG0E=) 2026-04-04 00:22:32.489491 | orchestrator | 2026-04-04 00:22:32.489502 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:32.489575 | orchestrator | Saturday 04 April 2026 00:22:29 +0000 (0:00:00.966) 0:00:08.848 ******** 2026-04-04 00:22:32.489587 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8YUaFgP5mqR2xOjO6mcsPsClod0jDTWiceEUaQNCYs) 2026-04-04 00:22:32.489670 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7xNueKi7G2joobEsav+1xIWkve4nzProGFKhrxIxYhHtcHTyQO6ByJKFByP18vlBtYvwlwI23DD7BkBUxpHzqMLnxwvzvQtAX6+M+xf5nSj3yVZmZRkhQUJMTWRwXzt9QqSXZY5RL7BlkEcyi9SZjz3Ho0jeXq08nEKE7syih1t0QIzfbRGDIK+p0Z60tVBwEum5pR6xT+XvxrABhE5YisdgZrk6UoImf9QVLQPZ3Fg328azVcQnpr861d46j4xqe+GYIcFbduL5mPr5N/I0qLKk51+nOki8w8YNNwh3uBIy9Hzh0JZPgvoIwnBzG/ZMXqnyp5IW5cxOudBHORT0nSeobvlmhDkbrWlVX4l0TMWcaXWw/hooB6QTqVp/aAoacb6ix0+W3bZD6XDPDw+qAg/xonkJ/nSKqu1fG5StQYs1IdbSD6ageA08Hy2zAoeJsDrA6ETdmJPXKkW6x+7M8GWgUiZyeCdG9ktGo5UGw0i6xc/RHQlWZTibCXO09UUk=) 2026-04-04 00:22:32.489683 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBApEcEEc3v8zsawVBV3UyedN9t9BTvUQUBCm9YqgyyhjCse4+lvDCDe1ewTwpCPCNwKAKQoULfPVrX6ABhrvLU8=) 2026-04-04 00:22:32.489694 | orchestrator | 2026-04-04 00:22:32.489705 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:32.489716 | orchestrator | Saturday 04 April 2026 00:22:30 +0000 (0:00:00.965) 0:00:09.813 ******** 2026-04-04 00:22:32.489731 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4YCUoB87IOtKyTCZs5/uhp2LU4LzG/Y8i+dqvsnhBysvMswLoqX9EqAi75GgKNJ09olip3BmDTdOv/MSQCc0FCcryL3+hPtmwY6kd/AyM75Ev+8yDf6Lum5JNp7vq170X6MzZbRTNCnvkhTr26Z/S35YpQgo7GLT1LRqh1/cbmcPYUzU3X88RPhb7DS+HTRQURxbeZSDPzUH3Zh/RpVERJmCrO6jQ1n8KJQQbDYPnBPfCo4AjfVMIjzshC9sRLwfL/rbdw3sV8/N3P/u4u55OKxwrEhqpI/mUfloreovA4aIYJivEXMjsd/pZubS5oruu5h6gdnkyC0nXPDYSn9S3hKh+26Y4dnq6ZP7MxviVve+suw5FNdiukQNJ5W13KOjYaW42/Z1eWrJjTNE/s7CkJVh7DBo4UfMCZF9EKnnRnQitgemQrK490wMwckebGnZQsJPi2waEZmwDVWH0x3JENUG8IiaXaP10n7N5Svy2gEXJtSlbaS8aolSoynVhT4E=) 2026-04-04 00:22:32.489743 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBODfKpPpwvETlCqv8Buz0tuLNV2WoSbeBjI6aYSDAMmnhXM9/cvOIV4LB+OJ3TAe4swlto5ALyqlmhTKreyHcqw=) 2026-04-04 00:22:32.489754 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM993qsuu5d55lWHSSmKfTeagxu38pow8KDD7cmMLZby) 2026-04-04 00:22:32.489765 | orchestrator | 2026-04-04 00:22:32.489776 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:32.489787 | orchestrator | Saturday 04 April 2026 00:22:31 +0000 (0:00:00.994) 0:00:10.808 ******** 2026-04-04 00:22:32.489799 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCr3xhBPzoJrmhyLYX3xqL1AqTGOJS9bawWUVGpluz45Th6SrduJ9yeDJrY2OAYDFIPr1rCZXCmYjOmZXGZNG7c7X14IK2dWMrAfiYuhoKnlBR0CupQLtU8JzQZmp39Ucxq76HfuptpkSrLf8upfhh00Igl+1gRrX+vLofPxY9FWlz7+CAwpu4K4dnftcr5ZO5FP7BJ7rC7epO6tg3hUoxVd9LENTh860pBvuQKNEAbT4t3kxjJHsk/aT3eF6oFF9DV4yHBipUiZSOP2SCZ8v/297J6ifTgVe+sZm2IabTDVDmAQ6GzJZiPnmGr4KFxYynC+yiIDFmpljLFWItcAcsH9e/jVgd2NVmCfUO++nKb26E5usBbX9H4zS08Z3Hme2ABEXr0lllLAbU6idGzKF8Uz4uZVyxcqyVn2AwYE4rAhRsH2wqX84Jx/wpgigtjNgw0faNFH1tvR8HsVGF0QRFGUoK5wh7lnhowBZiP1nQ93SBqhKWd1SpQ7fyWuEOXKW8=) 2026-04-04 00:22:32.489818 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIPWcHqlYQFfrNsmtH+HZli9mUSj3hMy+42IVmkL8QYHj/2Dsni0knoCzzvOK8cbl7B0oAiyhxk3zzZonPTf6Tw=) 2026-04-04 00:22:32.489829 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA/RUVHoWDUG/f2GbRez/Oplye/yfHlGqTuyqBziV5pU) 2026-04-04 00:22:32.489840 | orchestrator | 2026-04-04 00:22:32.489852 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:32.489862 | orchestrator | Saturday 04 April 2026 00:22:32 +0000 (0:00:01.000) 0:00:11.808 ******** 2026-04-04 00:22:32.489882 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwBHbFcISLzPbt6jIqUX8LybDDM+RfidZLQHUQgsW2EkFDVESUtsXhokDobn/yMpjv6CeGnxFAaS8dU58o2K6Q=) 2026-04-04 00:22:43.056229 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEOMekgma5OGjowjd0V5mQr0rg35qnvTJ1xOuBURPsVAiEGR85vUxKwNRIQPTDoMG5T+MpDA/tfMEN6k7O53zBv5AEH5KfO1Mt57SjvflEWt815Nky4DaQ9s6RfRsC5eWE77v/40zz7d7YLzCDbbtT0wFyfMSZGeE94hGkfzAFAShRZ/RbJEY6NySUN6tWr4NR3k4pSIw83iU2binIHm4p34tEry81mBjU+eLjMz9obRlIAfKimNn02qwdcBeF+dz+5w22c/PxAhuddNk+E2m5SntT/gjiWd/UPCLyEADj6uIG6wE8n4gFqN3IxtF52UtfNaLPGptxD1mM+ofMi7esxoYoTP9/2zDPMCXfC7/uUNHJZHLUQ3ZjcX6EWquQ7dcfWaygeF4N8PCR/2c+Yxf5YfSwSMK8gHYHuvOIFHok7OGW6k4Is4uMjX9zxhZWUEMn7cXqxmyY2Q70I8WlQiPylK75h/R8kwgcM8oi9uAFNj/A8EZkHTL7KPfBgm8vWVM=) 2026-04-04 00:22:43.056343 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHG99WJ2PuaBmuCFGrHxcstMBGMqS7nHMj/NaIk2JsLZ) 2026-04-04 00:22:43.056359 | orchestrator | 2026-04-04 00:22:43.056372 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:43.056383 | orchestrator | Saturday 04 April 2026 00:22:33 +0000 (0:00:01.008) 0:00:12.816 ******** 2026-04-04 00:22:43.056394 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdV9wukTlrL7gNPT1hAh8E8CjodzZ4RuvRyBwjOE8QFGITcQGkNKf9HwJ/vcZYaqN7szOipTARwnAoKnwI6omM=) 2026-04-04 00:22:43.056406 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPvjJe5C71y1skmuTQidsyvFVBczC8i3Ez+mge4qQB0V) 2026-04-04 00:22:43.056417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCmnLIGTG3jqr7NRpE3lXW9rwQYPzPbp/zyHpf3uj1Xpf3MWDyOpFcXS9khq9+ILrmDTUHo3LOmh9+i/bzNeJomgTgut3tqkV7NzVJv6szyxiRLWQkA0ZQlDgyeSCIWdRda8v8b7bZgHXKmeUggj+IzYwDOpJ/xc1UNznaqAgRjR0niErkd7sEDelOsQpBRkjS5Bv+9wj9yWTF9YkU5Tq90GvDQ7gmkuAMYi7LOSpKS317JzkidlM47w2nW0slCigN+dbTXSQzlz0uxTmzopPMsofNa8HPeSpFBFpeWgGJO6L4xEX/tnP5Rqj61Rz01GXxtlAdPRl7xbfqSmxEpIYIVIePLaHEUcu2knyKvmxflrNoFmkZECqg9f2bTvvtQyU1QUP87GRvJk91hYOcwhjh4Q19WNFo5t6UIE98GZTGJ90mYeodESFInZ7L4cFyTVVY5IFnpP35oCqKLjvTkuTA25cEVjQZNyTE0hZvS7/a2hA9SJAKe+nsHhL9f0v4Z9XM=) 2026-04-04 00:22:43.056427 | orchestrator | 2026-04-04 00:22:43.056437 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-04 00:22:43.056449 | orchestrator | Saturday 04 April 2026 00:22:34 +0000 (0:00:01.017) 0:00:13.834 ******** 2026-04-04 00:22:43.056459 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-04 00:22:43.056487 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-04 00:22:43.056498 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-04 00:22:43.056577 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-04 00:22:43.056589 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-04 00:22:43.056599 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-04 00:22:43.056608 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-04 00:22:43.056618 | orchestrator | 2026-04-04 00:22:43.056628 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-04 00:22:43.056638 | orchestrator | Saturday 04 April 2026 00:22:39 +0000 (0:00:05.189) 0:00:19.024 ******** 2026-04-04 00:22:43.056649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2026-04-04 00:22:43.056661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-04 00:22:43.056671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-04 00:22:43.056680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-04 00:22:43.056690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-04 00:22:43.056700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-04 00:22:43.056710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-04 00:22:43.056719 | orchestrator | 2026-04-04 00:22:43.056745 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:43.056756 | orchestrator | Saturday 04 April 2026 00:22:39 +0000 (0:00:00.179) 0:00:19.203 ******** 2026-04-04 00:22:43.056768 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGBQP5Ft8dsP0sFg4GI+oyLWmyKE4NlDXZwII29KkJqS) 2026-04-04 00:22:43.056785 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChnAziZjqWyMSBm2NJSyfit3ybSESeKZMtNZwwzaVD/SzoI4NLputbFWWalaSIklgV6Ntte21p41uNASFIfC2W2YYGwwFhoMyq8nrblRuFHkQz1HjZFA5OGxK4/SpTYYhOR0SNVI22ps0+QRxWNYn5XUn84I5jdnQ1srcufBWDfRK73+Bs+PvlgFSJLCPddz9oYndeh9izXGivo9cTkBp+38WeZq/EUNBhbdmo089NJeGlC6i8bxUlU7g6qYluZeypwnc7gJy0WpJa5EZu1xVXnp0XL7yYmqnOdPmYFnu/jvNOJguNn55htaJ0wciROjzTT7JgSosk5+iNkdgdpjLt5qan25YNVwdlUOAnIPBw/QFELEVduqmj9W1QX+7RpxCFtPoElHMQaUExGi1F9nNmr2Zrq1PV4OKsrneEFYTVBmSU9KG2RoQC17eKskWQALOeE7r2ZL4A/aWR4UTsknnJtzE2Oe+gBk+b9VS8fEMJACzUvb0HoawQkHz1Pe4rArs=) 2026-04-04 00:22:43.056797 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMqbUJQF5mDpnWWgv0I53LKSrRt9XMrxZOnmFeOHN0HVbqAjibNnTG8wVpDnZA23hTQ7DfjVjKAzprcy2TbQLCs=) 2026-04-04 00:22:43.056810 | orchestrator | 2026-04-04 00:22:43.056821 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:43.056833 | orchestrator | Saturday 04 April 2026 00:22:40 +0000 (0:00:01.052) 0:00:20.256 ******** 2026-04-04 00:22:43.056845 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLMguYlTfW74zmfx59ICl49/nFfbC3ajO8h1kFRW8fFVfNZEmTjCggo871G6n9akAOJqTQwS2u1l1AFTCbUFKOnH7XLtjL2yFnkuj3FgcOGZ2BjT/3LD/fo0guRU36/QfWNpPebRKatMNq4N2K/kD12kiJKlSZbPkegHLTXyHq53PGSblirypJ7y3Wcquxv/XuTmqtUHF4qPhTA2usefHHsE6K8FWtkhH/5EC3k4r0Iek1Z5S6fTR4+QAHjK8LErME8Nlb/UT8KO1oSu4bscgOuyRmckM0HsivN9t8ED4TRpFdyJH4iViR+Z0YE0YwSzOBhEsMWgLetGRGUQTbxkTGruOnXLyvc+yDEqIG1J02Zhwr2PhIsXrEa8P/7GiXeaeISsyHIg2C8Z6KYol94MPI8e0rfZK/efdCFo1JgCiX9Jythrrh0P38tk0LJh/r+Ss3Du9MK6CO1+JzQ1dupbBH83KeO2yV6BTx7+gnTOf4Dxjz+Mln9c8MvgCygYY+glU=) 2026-04-04 00:22:43.056866 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMEKsYEVj8kkW+VgIWvWBbCErbtYGGVU8nzN3joKQhQktlBjqKc4ijdcU6VV8LcS4MCt+7vOlmN9RtmQ6/3mG0E=) 
2026-04-04 00:22:43.056878 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIxdPjQIgtn6BsOnni3HzmHkCEvmB/aqF977pqpxC2K+) 2026-04-04 00:22:43.056889 | orchestrator | 2026-04-04 00:22:43.056901 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:43.056913 | orchestrator | Saturday 04 April 2026 00:22:41 +0000 (0:00:01.043) 0:00:21.299 ******** 2026-04-04 00:22:43.056925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBApEcEEc3v8zsawVBV3UyedN9t9BTvUQUBCm9YqgyyhjCse4+lvDCDe1ewTwpCPCNwKAKQoULfPVrX6ABhrvLU8=) 2026-04-04 00:22:43.056937 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7xNueKi7G2joobEsav+1xIWkve4nzProGFKhrxIxYhHtcHTyQO6ByJKFByP18vlBtYvwlwI23DD7BkBUxpHzqMLnxwvzvQtAX6+M+xf5nSj3yVZmZRkhQUJMTWRwXzt9QqSXZY5RL7BlkEcyi9SZjz3Ho0jeXq08nEKE7syih1t0QIzfbRGDIK+p0Z60tVBwEum5pR6xT+XvxrABhE5YisdgZrk6UoImf9QVLQPZ3Fg328azVcQnpr861d46j4xqe+GYIcFbduL5mPr5N/I0qLKk51+nOki8w8YNNwh3uBIy9Hzh0JZPgvoIwnBzG/ZMXqnyp5IW5cxOudBHORT0nSeobvlmhDkbrWlVX4l0TMWcaXWw/hooB6QTqVp/aAoacb6ix0+W3bZD6XDPDw+qAg/xonkJ/nSKqu1fG5StQYs1IdbSD6ageA08Hy2zAoeJsDrA6ETdmJPXKkW6x+7M8GWgUiZyeCdG9ktGo5UGw0i6xc/RHQlWZTibCXO09UUk=) 2026-04-04 00:22:43.056949 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8YUaFgP5mqR2xOjO6mcsPsClod0jDTWiceEUaQNCYs) 2026-04-04 00:22:43.056961 | orchestrator | 2026-04-04 00:22:43.056973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:43.056984 | orchestrator | Saturday 04 April 2026 00:22:42 +0000 (0:00:01.058) 0:00:22.357 ******** 2026-04-04 00:22:43.057011 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4YCUoB87IOtKyTCZs5/uhp2LU4LzG/Y8i+dqvsnhBysvMswLoqX9EqAi75GgKNJ09olip3BmDTdOv/MSQCc0FCcryL3+hPtmwY6kd/AyM75Ev+8yDf6Lum5JNp7vq170X6MzZbRTNCnvkhTr26Z/S35YpQgo7GLT1LRqh1/cbmcPYUzU3X88RPhb7DS+HTRQURxbeZSDPzUH3Zh/RpVERJmCrO6jQ1n8KJQQbDYPnBPfCo4AjfVMIjzshC9sRLwfL/rbdw3sV8/N3P/u4u55OKxwrEhqpI/mUfloreovA4aIYJivEXMjsd/pZubS5oruu5h6gdnkyC0nXPDYSn9S3hKh+26Y4dnq6ZP7MxviVve+suw5FNdiukQNJ5W13KOjYaW42/Z1eWrJjTNE/s7CkJVh7DBo4UfMCZF9EKnnRnQitgemQrK490wMwckebGnZQsJPi2waEZmwDVWH0x3JENUG8IiaXaP10n7N5Svy2gEXJtSlbaS8aolSoynVhT4E=) 2026-04-04 00:22:47.687723 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBODfKpPpwvETlCqv8Buz0tuLNV2WoSbeBjI6aYSDAMmnhXM9/cvOIV4LB+OJ3TAe4swlto5ALyqlmhTKreyHcqw=) 2026-04-04 00:22:47.687824 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM993qsuu5d55lWHSSmKfTeagxu38pow8KDD7cmMLZby) 2026-04-04 00:22:47.687840 | orchestrator | 2026-04-04 00:22:47.687852 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:47.687863 | orchestrator | Saturday 04 April 2026 00:22:43 +0000 (0:00:01.016) 0:00:23.374 ******** 2026-04-04 00:22:47.687876 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr3xhBPzoJrmhyLYX3xqL1AqTGOJS9bawWUVGpluz45Th6SrduJ9yeDJrY2OAYDFIPr1rCZXCmYjOmZXGZNG7c7X14IK2dWMrAfiYuhoKnlBR0CupQLtU8JzQZmp39Ucxq76HfuptpkSrLf8upfhh00Igl+1gRrX+vLofPxY9FWlz7+CAwpu4K4dnftcr5ZO5FP7BJ7rC7epO6tg3hUoxVd9LENTh860pBvuQKNEAbT4t3kxjJHsk/aT3eF6oFF9DV4yHBipUiZSOP2SCZ8v/297J6ifTgVe+sZm2IabTDVDmAQ6GzJZiPnmGr4KFxYynC+yiIDFmpljLFWItcAcsH9e/jVgd2NVmCfUO++nKb26E5usBbX9H4zS08Z3Hme2ABEXr0lllLAbU6idGzKF8Uz4uZVyxcqyVn2AwYE4rAhRsH2wqX84Jx/wpgigtjNgw0faNFH1tvR8HsVGF0QRFGUoK5wh7lnhowBZiP1nQ93SBqhKWd1SpQ7fyWuEOXKW8=) 2026-04-04 00:22:47.687914 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIPWcHqlYQFfrNsmtH+HZli9mUSj3hMy+42IVmkL8QYHj/2Dsni0knoCzzvOK8cbl7B0oAiyhxk3zzZonPTf6Tw=) 2026-04-04 00:22:47.687926 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA/RUVHoWDUG/f2GbRez/Oplye/yfHlGqTuyqBziV5pU) 2026-04-04 00:22:47.687936 | orchestrator | 2026-04-04 00:22:47.687975 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:47.687996 | orchestrator | Saturday 04 April 2026 00:22:44 +0000 (0:00:01.042) 0:00:24.417 ******** 2026-04-04 00:22:47.688006 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHG99WJ2PuaBmuCFGrHxcstMBGMqS7nHMj/NaIk2JsLZ) 2026-04-04 00:22:47.688016 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEOMekgma5OGjowjd0V5mQr0rg35qnvTJ1xOuBURPsVAiEGR85vUxKwNRIQPTDoMG5T+MpDA/tfMEN6k7O53zBv5AEH5KfO1Mt57SjvflEWt815Nky4DaQ9s6RfRsC5eWE77v/40zz7d7YLzCDbbtT0wFyfMSZGeE94hGkfzAFAShRZ/RbJEY6NySUN6tWr4NR3k4pSIw83iU2binIHm4p34tEry81mBjU+eLjMz9obRlIAfKimNn02qwdcBeF+dz+5w22c/PxAhuddNk+E2m5SntT/gjiWd/UPCLyEADj6uIG6wE8n4gFqN3IxtF52UtfNaLPGptxD1mM+ofMi7esxoYoTP9/2zDPMCXfC7/uUNHJZHLUQ3ZjcX6EWquQ7dcfWaygeF4N8PCR/2c+Yxf5YfSwSMK8gHYHuvOIFHok7OGW6k4Is4uMjX9zxhZWUEMn7cXqxmyY2Q70I8WlQiPylK75h/R8kwgcM8oi9uAFNj/A8EZkHTL7KPfBgm8vWVM=) 2026-04-04 00:22:47.688027 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwBHbFcISLzPbt6jIqUX8LybDDM+RfidZLQHUQgsW2EkFDVESUtsXhokDobn/yMpjv6CeGnxFAaS8dU58o2K6Q=) 2026-04-04 00:22:47.688037 | orchestrator | 2026-04-04 00:22:47.688047 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:22:47.688057 | orchestrator | Saturday 04 April 2026 00:22:45 +0000 (0:00:00.988) 
0:00:25.406 ******** 2026-04-04 00:22:47.688066 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPvjJe5C71y1skmuTQidsyvFVBczC8i3Ez+mge4qQB0V) 2026-04-04 00:22:47.688076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmnLIGTG3jqr7NRpE3lXW9rwQYPzPbp/zyHpf3uj1Xpf3MWDyOpFcXS9khq9+ILrmDTUHo3LOmh9+i/bzNeJomgTgut3tqkV7NzVJv6szyxiRLWQkA0ZQlDgyeSCIWdRda8v8b7bZgHXKmeUggj+IzYwDOpJ/xc1UNznaqAgRjR0niErkd7sEDelOsQpBRkjS5Bv+9wj9yWTF9YkU5Tq90GvDQ7gmkuAMYi7LOSpKS317JzkidlM47w2nW0slCigN+dbTXSQzlz0uxTmzopPMsofNa8HPeSpFBFpeWgGJO6L4xEX/tnP5Rqj61Rz01GXxtlAdPRl7xbfqSmxEpIYIVIePLaHEUcu2knyKvmxflrNoFmkZECqg9f2bTvvtQyU1QUP87GRvJk91hYOcwhjh4Q19WNFo5t6UIE98GZTGJ90mYeodESFInZ7L4cFyTVVY5IFnpP35oCqKLjvTkuTA25cEVjQZNyTE0hZvS7/a2hA9SJAKe+nsHhL9f0v4Z9XM=) 2026-04-04 00:22:47.688087 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdV9wukTlrL7gNPT1hAh8E8CjodzZ4RuvRyBwjOE8QFGITcQGkNKf9HwJ/vcZYaqN7szOipTARwnAoKnwI6omM=) 2026-04-04 00:22:47.688097 | orchestrator | 2026-04-04 00:22:47.688106 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-04 00:22:47.688116 | orchestrator | Saturday 04 April 2026 00:22:46 +0000 (0:00:01.011) 0:00:26.418 ******** 2026-04-04 00:22:47.688126 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-04 00:22:47.688137 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-04 00:22:47.688163 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-04 00:22:47.688174 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-04 00:22:47.688183 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-04 00:22:47.688193 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-04 
00:22:47.688203 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-04 00:22:47.688221 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:22:47.688231 | orchestrator | 2026-04-04 00:22:47.688241 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-04 00:22:47.688251 | orchestrator | Saturday 04 April 2026 00:22:46 +0000 (0:00:00.176) 0:00:26.594 ******** 2026-04-04 00:22:47.688261 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:22:47.688270 | orchestrator | 2026-04-04 00:22:47.688280 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-04 00:22:47.688289 | orchestrator | Saturday 04 April 2026 00:22:46 +0000 (0:00:00.060) 0:00:26.655 ******** 2026-04-04 00:22:47.688299 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:22:47.688309 | orchestrator | 2026-04-04 00:22:47.688318 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-04 00:22:47.688329 | orchestrator | Saturday 04 April 2026 00:22:47 +0000 (0:00:00.066) 0:00:26.721 ******** 2026-04-04 00:22:47.688338 | orchestrator | changed: [testbed-manager] 2026-04-04 00:22:47.688348 | orchestrator | 2026-04-04 00:22:47.688358 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:22:47.688368 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 00:22:47.688379 | orchestrator | 2026-04-04 00:22:47.688389 | orchestrator | 2026-04-04 00:22:47.688399 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:22:47.688409 | orchestrator | Saturday 04 April 2026 00:22:47 +0000 (0:00:00.464) 0:00:27.186 ******** 2026-04-04 00:22:47.688418 | orchestrator | =============================================================================== 
2026-04-04 00:22:47.688428 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.30s 2026-04-04 00:22:47.688438 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2026-04-04 00:22:47.688448 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-04 00:22:47.688458 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-04 00:22:47.688468 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-04 00:22:47.688477 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-04 00:22:47.688487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-04 00:22:47.688496 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-04 00:22:47.688506 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-04 00:22:47.688515 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-04 00:22:47.688545 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-04 00:22:47.688571 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-04 00:22:47.688585 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-04 00:22:47.688595 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-04 00:22:47.688605 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-04-04 00:22:47.688614 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 
2026-04-04 00:22:47.688623 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s 2026-04-04 00:22:47.688633 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-04-04 00:22:47.688643 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-04-04 00:22:47.688652 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-04-04 00:22:47.844118 | orchestrator | + osism apply squid 2026-04-04 00:22:59.150698 | orchestrator | 2026-04-04 00:22:59 | INFO  | Prepare task for execution of squid. 2026-04-04 00:22:59.221853 | orchestrator | 2026-04-04 00:22:59 | INFO  | Task eb096d39-7f5f-49af-939f-9403ede6dbeb (squid) was prepared for execution. 2026-04-04 00:22:59.221994 | orchestrator | 2026-04-04 00:22:59 | INFO  | It takes a moment until task eb096d39-7f5f-49af-939f-9403ede6dbeb (squid) has been started and output is visible here. 
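The known_hosts play that just finished above boils down to a simple mechanism: run ssh-keyscan against each host, append the results to the operator's known_hosts file, then tighten the file permissions. A minimal shell sketch of that mechanism, with the host list and output path as illustrative placeholders (in the role they come from the Ansible inventory, not from this log):

```shell
# Sketch: gather host keys the way the osism.commons.known_hosts role does.
# HOSTS is a placeholder list; the hosts are unreachable in this sketch,
# so scan failures are tolerated.
HOSTS="testbed-node-0 testbed-node-1"
KNOWN_HOSTS="$(mktemp)"   # stand-in for the real known_hosts path

for host in $HOSTS; do
    # Scan RSA, ECDSA and Ed25519 keys; -T bounds the per-host timeout.
    ssh-keyscan -T 2 -t rsa,ecdsa,ed25519 "$host" >> "$KNOWN_HOSTS" 2>/dev/null || true
done

# Mirror the role's final "Set file permissions" task.
chmod 0644 "$KNOWN_HOSTS"
```

Scanning first and writing entries per host afterwards, as the role does in its "Run ssh-keyscan …" and "Write scanned known_hosts entries" tasks, keeps each host's keys grouped and makes a failed scan visible as a missing block rather than a truncated file.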
2026-04-04 00:24:51.551069 | orchestrator | 2026-04-04 00:24:51.551177 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-04 00:24:51.551188 | orchestrator | 2026-04-04 00:24:51.551194 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-04 00:24:51.551208 | orchestrator | Saturday 04 April 2026 00:23:02 +0000 (0:00:00.144) 0:00:00.144 ******** 2026-04-04 00:24:51.551215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:24:51.551221 | orchestrator | 2026-04-04 00:24:51.551228 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-04 00:24:51.551233 | orchestrator | Saturday 04 April 2026 00:23:02 +0000 (0:00:00.065) 0:00:00.210 ******** 2026-04-04 00:24:51.551239 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:51.551247 | orchestrator | 2026-04-04 00:24:51.551253 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-04 00:24:51.551259 | orchestrator | Saturday 04 April 2026 00:23:04 +0000 (0:00:01.883) 0:00:02.094 ******** 2026-04-04 00:24:51.551265 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-04 00:24:51.551271 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-04 00:24:51.551277 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-04 00:24:51.551282 | orchestrator | 2026-04-04 00:24:51.551288 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-04 00:24:51.551294 | orchestrator | Saturday 04 April 2026 00:23:05 +0000 (0:00:01.092) 0:00:03.186 ******** 2026-04-04 00:24:51.551300 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-04 00:24:51.551306 | 
orchestrator | 2026-04-04 00:24:51.551312 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-04 00:24:51.551317 | orchestrator | Saturday 04 April 2026 00:23:06 +0000 (0:00:00.880) 0:00:04.067 ******** 2026-04-04 00:24:51.551323 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:51.551329 | orchestrator | 2026-04-04 00:24:51.551335 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-04 00:24:51.551341 | orchestrator | Saturday 04 April 2026 00:23:06 +0000 (0:00:00.292) 0:00:04.360 ******** 2026-04-04 00:24:51.551346 | orchestrator | changed: [testbed-manager] 2026-04-04 00:24:51.551352 | orchestrator | 2026-04-04 00:24:51.551358 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-04 00:24:51.551364 | orchestrator | Saturday 04 April 2026 00:23:07 +0000 (0:00:00.855) 0:00:05.215 ******** 2026-04-04 00:24:51.551369 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-04 00:24:51.551376 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:51.551382 | orchestrator | 2026-04-04 00:24:51.551388 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-04 00:24:51.551394 | orchestrator | Saturday 04 April 2026 00:23:38 +0000 (0:00:31.379) 0:00:36.594 ******** 2026-04-04 00:24:51.551399 | orchestrator | changed: [testbed-manager] 2026-04-04 00:24:51.551405 | orchestrator | 2026-04-04 00:24:51.551411 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-04 00:24:51.551417 | orchestrator | Saturday 04 April 2026 00:23:50 +0000 (0:00:11.950) 0:00:48.545 ******** 2026-04-04 00:24:51.551423 | orchestrator | Pausing for 60 seconds 2026-04-04 00:24:51.551431 | orchestrator | changed: [testbed-manager] 2026-04-04 00:24:51.551442 | orchestrator | 2026-04-04 00:24:51.551452 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-04 00:24:51.551462 | orchestrator | Saturday 04 April 2026 00:24:50 +0000 (0:01:00.072) 0:01:48.618 ******** 2026-04-04 00:24:51.551497 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:51.551508 | orchestrator | 2026-04-04 00:24:51.551518 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-04 00:24:51.551528 | orchestrator | Saturday 04 April 2026 00:24:50 +0000 (0:00:00.061) 0:01:48.680 ******** 2026-04-04 00:24:51.551534 | orchestrator | changed: [testbed-manager] 2026-04-04 00:24:51.551539 | orchestrator | 2026-04-04 00:24:51.551545 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:24:51.551551 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:24:51.551557 | orchestrator | 2026-04-04 00:24:51.551563 | orchestrator | 2026-04-04 00:24:51.551569 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-04 00:24:51.551621 | orchestrator | Saturday 04 April 2026 00:24:51 +0000 (0:00:00.593) 0:01:49.273 ******** 2026-04-04 00:24:51.551630 | orchestrator | =============================================================================== 2026-04-04 00:24:51.551636 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-04-04 00:24:51.551641 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.38s 2026-04-04 00:24:51.551647 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.95s 2026-04-04 00:24:51.551653 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.88s 2026-04-04 00:24:51.551659 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.09s 2026-04-04 00:24:51.551665 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.88s 2026-04-04 00:24:51.551670 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.86s 2026-04-04 00:24:51.551676 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2026-04-04 00:24:51.551682 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.29s 2026-04-04 00:24:51.551687 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-04-04 00:24:51.551693 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-04 00:24:51.720865 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 00:24:51.720958 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-04 00:24:51.727156 | orchestrator | + set -e 2026-04-04 00:24:51.727188 | orchestrator | + NAMESPACE=kolla 2026-04-04 
00:24:51.727201 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-04 00:24:51.733790 | orchestrator | ++ semver latest 9.0.0 2026-04-04 00:24:51.788708 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-04 00:24:51.788783 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 00:24:51.788796 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-04 00:25:03.060676 | orchestrator | 2026-04-04 00:25:03 | INFO  | Prepare task for execution of operator. 2026-04-04 00:25:03.134240 | orchestrator | 2026-04-04 00:25:03 | INFO  | Task 9dea22ee-b18a-4536-a45a-d0c41d40f14f (operator) was prepared for execution. 2026-04-04 00:25:03.134334 | orchestrator | 2026-04-04 00:25:03 | INFO  | It takes a moment until task 9dea22ee-b18a-4536-a45a-d0c41d40f14f (operator) has been started and output is visible here. 2026-04-04 00:25:19.129759 | orchestrator | 2026-04-04 00:25:19.129873 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-04 00:25:19.129891 | orchestrator | 2026-04-04 00:25:19.129903 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:25:19.129916 | orchestrator | Saturday 04 April 2026 00:25:06 +0000 (0:00:00.179) 0:00:00.179 ******** 2026-04-04 00:25:19.129928 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:19.129940 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:19.129950 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:19.129961 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:19.130101 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:19.130119 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:19.130130 | orchestrator | 2026-04-04 00:25:19.130141 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-04 00:25:19.130152 | orchestrator | Saturday 04 April 2026 00:25:09 
+0000 (0:00:03.662) 0:00:03.842 ******** 2026-04-04 00:25:19.130163 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:19.130174 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:19.130184 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:19.130195 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:19.130206 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:19.130217 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:19.130228 | orchestrator | 2026-04-04 00:25:19.130239 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-04 00:25:19.130250 | orchestrator | 2026-04-04 00:25:19.130261 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-04 00:25:19.130272 | orchestrator | Saturday 04 April 2026 00:25:10 +0000 (0:00:00.922) 0:00:04.765 ******** 2026-04-04 00:25:19.130283 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:19.130296 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:19.130309 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:19.130322 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:19.130334 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:19.130347 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:19.130360 | orchestrator | 2026-04-04 00:25:19.130373 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-04 00:25:19.130407 | orchestrator | Saturday 04 April 2026 00:25:11 +0000 (0:00:00.167) 0:00:04.932 ******** 2026-04-04 00:25:19.130421 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:19.130433 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:19.130445 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:19.130457 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:19.130470 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:19.130483 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:19.130495 | orchestrator | 
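Most of the remaining operator tasks lay down small files on each node: a sudoers drop-in and locale exports in the operator's .bashrc. The content they generate can be sketched with temp-file stand-ins (the user name and paths are placeholders; the real role writes /etc/sudoers.d/<user> and the operator's ~/.bashrc):

```shell
# Sketch of the file content the osism.commons.operator role lays down.
USER_NAME=dragon                   # placeholder operator name
SUDOERS_FILE="$(mktemp)"           # stand-in for /etc/sudoers.d/$USER_NAME
BASHRC_FILE="$(mktemp)"            # stand-in for the operator's ~/.bashrc

# "Copy user sudoers file": passwordless sudo for the operator.
printf '%s ALL=(ALL) NOPASSWD: ALL\n' "$USER_NAME" > "$SUDOERS_FILE"
chmod 0440 "$SUDOERS_FILE"         # sudoers files must not be world-writable

# "Set language variables in .bashrc configuration file": one export
# per variable, matching the loop items visible in the log below.
for var in LANGUAGE LANG LC_ALL; do
    printf 'export %s=C.UTF-8\n' "$var" >> "$BASHRC_FILE"
done
```

Appending one `export` line per loop item is why the task output shows a separate `changed:` line per node and per variable.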
2026-04-04 00:25:19.130507 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-04 00:25:19.130520 | orchestrator | Saturday 04 April 2026 00:25:11 +0000 (0:00:00.140) 0:00:05.072 ******** 2026-04-04 00:25:19.130533 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:25:19.130546 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:25:19.130559 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:25:19.130663 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:25:19.130684 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:25:19.130702 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:25:19.130720 | orchestrator | 2026-04-04 00:25:19.130732 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-04 00:25:19.130742 | orchestrator | Saturday 04 April 2026 00:25:11 +0000 (0:00:00.723) 0:00:05.796 ******** 2026-04-04 00:25:19.130753 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:25:19.130764 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:25:19.130774 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:25:19.130785 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:25:19.130796 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:25:19.130806 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:25:19.130817 | orchestrator | 2026-04-04 00:25:19.130828 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-04 00:25:19.130839 | orchestrator | Saturday 04 April 2026 00:25:12 +0000 (0:00:00.991) 0:00:06.788 ******** 2026-04-04 00:25:19.130849 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-04 00:25:19.130861 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-04 00:25:19.130872 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-04 00:25:19.130883 | orchestrator | changed: [testbed-node-3] => (item=adm) 
2026-04-04 00:25:19.130894 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-04 00:25:19.130917 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-04 00:25:19.130928 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-04 00:25:19.130939 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-04 00:25:19.130950 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-04 00:25:19.130961 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-04 00:25:19.130971 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-04 00:25:19.130982 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-04 00:25:19.130993 | orchestrator | 2026-04-04 00:25:19.131004 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-04 00:25:19.131015 | orchestrator | Saturday 04 April 2026 00:25:14 +0000 (0:00:01.262) 0:00:08.050 ******** 2026-04-04 00:25:19.131025 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:25:19.131036 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:25:19.131047 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:25:19.131057 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:25:19.131068 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:25:19.131079 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:25:19.131089 | orchestrator | 2026-04-04 00:25:19.131100 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-04 00:25:19.131112 | orchestrator | Saturday 04 April 2026 00:25:15 +0000 (0:00:01.451) 0:00:09.502 ******** 2026-04-04 00:25:19.131123 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-04 00:25:19.131135 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-04 00:25:19.131146 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 
2026-04-04 00:25:19.131157 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-04 00:25:19.131168 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-04 00:25:19.131202 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-04 00:25:19.131214 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-04 00:25:19.131225 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-04 00:25:19.131235 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-04 00:25:19.131246 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-04 00:25:19.131257 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-04 00:25:19.131267 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-04 00:25:19.131278 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-04 00:25:19.131289 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-04 00:25:19.131299 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-04 00:25:19.131310 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-04 00:25:19.131321 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-04 00:25:19.131331 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-04 00:25:19.131342 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-04 00:25:19.131353 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-04 00:25:19.131363 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-04 00:25:19.131374 | orchestrator |
2026-04-04 00:25:19.131385 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-04 00:25:19.131396 | orchestrator | Saturday 04 April 2026 00:25:16 +0000 (0:00:01.390) 0:00:10.893 ********
2026-04-04 00:25:19.131407 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:19.131417 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:19.131434 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:19.131446 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:19.131463 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:19.131473 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:19.131484 | orchestrator |
2026-04-04 00:25:19.131495 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-04 00:25:19.131506 | orchestrator | Saturday 04 April 2026 00:25:17 +0000 (0:00:00.139) 0:00:11.032 ********
2026-04-04 00:25:19.131516 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:19.131527 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:19.131537 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:19.131548 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:19.131558 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:19.131591 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:19.131602 | orchestrator |
2026-04-04 00:25:19.131613 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-04 00:25:19.131624 | orchestrator | Saturday 04 April 2026 00:25:17 +0000 (0:00:00.183) 0:00:11.215 ********
2026-04-04 00:25:19.131635 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:25:19.131646 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:25:19.131656 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:25:19.131667 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:25:19.131678 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:25:19.131688 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:25:19.131699 | orchestrator |
2026-04-04 00:25:19.131710 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-04 00:25:19.131721 | orchestrator | Saturday 04 April 2026 00:25:17 +0000 (0:00:00.633) 0:00:11.849 ********
2026-04-04 00:25:19.131731 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:19.131742 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:19.131753 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:19.131763 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:19.131774 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:19.131785 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:19.131795 | orchestrator |
2026-04-04 00:25:19.131806 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-04 00:25:19.131817 | orchestrator | Saturday 04 April 2026 00:25:18 +0000 (0:00:00.176) 0:00:12.025 ********
2026-04-04 00:25:19.131828 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-04 00:25:19.131838 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:25:19.131849 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-04 00:25:19.131860 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:25:19.131872 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-04 00:25:19.131891 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:25:19.131911 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-04 00:25:19.131930 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:25:19.131949 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-04 00:25:19.131968 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:25:19.131987 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-04 00:25:19.132008 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:25:19.132027 | orchestrator |
2026-04-04 00:25:19.132047 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-04 00:25:19.132066 | orchestrator | Saturday 04 April 2026 00:25:18 +0000 (0:00:00.730) 0:00:12.756 ********
2026-04-04 00:25:19.132082 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:19.132093 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:19.132103 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:19.132114 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:19.132124 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:19.132135 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:19.132146 | orchestrator |
2026-04-04 00:25:19.132156 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-04 00:25:19.132167 | orchestrator | Saturday 04 April 2026 00:25:19 +0000 (0:00:00.147) 0:00:12.903 ********
2026-04-04 00:25:19.132189 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:19.132200 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:19.132210 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:19.132221 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:19.132242 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:20.302323 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:20.302441 | orchestrator |
2026-04-04 00:25:20.302459 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-04 00:25:20.302472 | orchestrator | Saturday 04 April 2026 00:25:19 +0000 (0:00:00.150) 0:00:13.054 ********
2026-04-04 00:25:20.302483 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:20.302494 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:20.302505 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:20.302516 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:20.302527 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:20.302538 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:20.302548 | orchestrator |
2026-04-04 00:25:20.302560 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-04 00:25:20.302610 | orchestrator | Saturday 04 April 2026 00:25:19 +0000 (0:00:00.135) 0:00:13.189 ********
2026-04-04 00:25:20.302638 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:25:20.302649 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:25:20.302670 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:25:20.302681 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:25:20.302692 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:25:20.302703 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:25:20.302714 | orchestrator |
2026-04-04 00:25:20.302724 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-04 00:25:20.302735 | orchestrator | Saturday 04 April 2026 00:25:19 +0000 (0:00:00.634) 0:00:13.824 ********
2026-04-04 00:25:20.302746 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:20.302757 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:20.302768 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:20.302778 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:20.302789 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:20.302800 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:20.302811 | orchestrator |
2026-04-04 00:25:20.302821 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:25:20.302857 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 00:25:20.302873 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 00:25:20.302886 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 00:25:20.302899 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 00:25:20.302912 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 00:25:20.302924 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 00:25:20.302937 | orchestrator |
2026-04-04 00:25:20.302949 | orchestrator |
2026-04-04 00:25:20.302961 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:25:20.302974 | orchestrator | Saturday 04 April 2026 00:25:20 +0000 (0:00:00.200) 0:00:14.025 ********
2026-04-04 00:25:20.302986 | orchestrator | ===============================================================================
2026-04-04 00:25:20.302999 | orchestrator | Gathering Facts --------------------------------------------------------- 3.66s
2026-04-04 00:25:20.303036 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.45s
2026-04-04 00:25:20.303049 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s
2026-04-04 00:25:20.303063 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.26s
2026-04-04 00:25:20.303075 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.99s
2026-04-04 00:25:20.303087 | orchestrator | Do not require tty for all users ---------------------------------------- 0.92s
2026-04-04 00:25:20.303100 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2026-04-04 00:25:20.303112 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.72s
2026-04-04 00:25:20.303125 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2026-04-04 00:25:20.303137 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s
2026-04-04 00:25:20.303150 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2026-04-04 00:25:20.303162 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s
2026-04-04 00:25:20.303175 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-04-04 00:25:20.303188 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-04-04 00:25:20.303201 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-04-04 00:25:20.303214 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-04 00:25:20.303226 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2026-04-04 00:25:20.303237 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-04-04 00:25:20.303248 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-04-04 00:25:20.475772 | orchestrator | + osism apply --environment custom facts
2026-04-04 00:25:21.696326 | orchestrator | 2026-04-04 00:25:21 | INFO  | Trying to run play facts in environment custom
2026-04-04 00:25:31.754433 | orchestrator | 2026-04-04 00:25:31 | INFO  | Prepare task for execution of facts.
2026-04-04 00:25:31.831228 | orchestrator | 2026-04-04 00:25:31 | INFO  | Task 202783ba-45ec-40fb-b481-581914b8a820 (facts) was prepared for execution.
2026-04-04 00:25:31.831325 | orchestrator | 2026-04-04 00:25:31 | INFO  | It takes a moment until task 202783ba-45ec-40fb-b481-581914b8a820 (facts) has been started and output is visible here.
2026-04-04 00:26:15.984361 | orchestrator |
2026-04-04 00:26:15.984474 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-04 00:26:15.984487 | orchestrator |
2026-04-04 00:26:15.984492 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-04 00:26:15.984497 | orchestrator | Saturday 04 April 2026 00:25:34 +0000 (0:00:00.114) 0:00:00.114 ********
2026-04-04 00:26:15.984501 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:26:15.984506 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:26:15.984510 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:26:15.984514 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:26:15.984518 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:26:15.984522 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:26:15.984526 | orchestrator | ok: [testbed-manager]
2026-04-04 00:26:15.984531 | orchestrator |
2026-04-04 00:26:15.984535 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-04 00:26:15.984539 | orchestrator | Saturday 04 April 2026 00:25:36 +0000 (0:00:01.371) 0:00:01.486 ********
2026-04-04 00:26:15.984543 | orchestrator | ok: [testbed-manager]
2026-04-04 00:26:15.984547 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:26:15.984551 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:26:15.984555 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:26:15.984616 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:26:15.984633 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:26:15.984636 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:26:15.984640 | orchestrator |
2026-04-04 00:26:15.984644 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-04 00:26:15.984648 | orchestrator |
2026-04-04 00:26:15.984651 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-04 00:26:15.984655 | orchestrator | Saturday 04 April 2026 00:25:37 +0000 (0:00:01.319) 0:00:02.805 ********
2026-04-04 00:26:15.984661 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.984667 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.984673 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.984679 | orchestrator |
2026-04-04 00:26:15.984685 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-04 00:26:15.984692 | orchestrator | Saturday 04 April 2026 00:25:37 +0000 (0:00:00.098) 0:00:02.903 ********
2026-04-04 00:26:15.984698 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.984704 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.984710 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.984716 | orchestrator |
2026-04-04 00:26:15.984722 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-04 00:26:15.984729 | orchestrator | Saturday 04 April 2026 00:25:37 +0000 (0:00:00.208) 0:00:03.112 ********
2026-04-04 00:26:15.984735 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.984741 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.984747 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.984753 | orchestrator |
2026-04-04 00:26:15.984759 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-04 00:26:15.984766 | orchestrator | Saturday 04 April 2026 00:25:38 +0000 (0:00:00.203) 0:00:03.315 ********
2026-04-04 00:26:15.984774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:26:15.984782 | orchestrator |
2026-04-04 00:26:15.984789 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-04 00:26:15.984794 | orchestrator | Saturday 04 April 2026 00:25:38 +0000 (0:00:00.143) 0:00:03.458 ********
2026-04-04 00:26:15.984801 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.984807 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.984814 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.984820 | orchestrator |
2026-04-04 00:26:15.984826 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-04 00:26:15.984832 | orchestrator | Saturday 04 April 2026 00:25:38 +0000 (0:00:00.458) 0:00:03.917 ********
2026-04-04 00:26:15.984837 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:26:15.984844 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:26:15.984851 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:26:15.984858 | orchestrator |
2026-04-04 00:26:15.984865 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-04 00:26:15.984871 | orchestrator | Saturday 04 April 2026 00:25:38 +0000 (0:00:00.158) 0:00:04.075 ********
2026-04-04 00:26:15.984877 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:26:15.984883 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:26:15.984889 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:26:15.984895 | orchestrator |
2026-04-04 00:26:15.984901 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-04 00:26:15.984909 | orchestrator | Saturday 04 April 2026 00:25:39 +0000 (0:00:01.067) 0:00:05.143 ********
2026-04-04 00:26:15.984917 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.984923 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.984930 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.984936 | orchestrator |
2026-04-04 00:26:15.984943 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-04 00:26:15.984951 | orchestrator | Saturday 04 April 2026 00:25:40 +0000 (0:00:00.491) 0:00:05.634 ********
2026-04-04 00:26:15.984965 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:26:15.984970 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:26:15.984974 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:26:15.984979 | orchestrator |
2026-04-04 00:26:15.984983 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-04 00:26:15.984988 | orchestrator | Saturday 04 April 2026 00:25:41 +0000 (0:00:01.079) 0:00:06.714 ********
2026-04-04 00:26:15.984992 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:26:15.984996 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:26:15.985000 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:26:15.985005 | orchestrator |
2026-04-04 00:26:15.985009 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-04 00:26:15.985014 | orchestrator | Saturday 04 April 2026 00:25:58 +0000 (0:00:16.943) 0:00:23.657 ********
2026-04-04 00:26:15.985018 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:26:15.985023 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:26:15.985028 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:26:15.985032 | orchestrator |
2026-04-04 00:26:15.985037 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-04 00:26:15.985055 | orchestrator | Saturday 04 April 2026 00:25:58 +0000 (0:00:00.079) 0:00:23.737 ********
2026-04-04 00:26:15.985073 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:26:15.985077 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:26:15.985081 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:26:15.985085 | orchestrator |
2026-04-04 00:26:15.985089 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-04 00:26:15.985092 | orchestrator | Saturday 04 April 2026 00:26:06 +0000 (0:00:08.428) 0:00:32.166 ********
2026-04-04 00:26:15.985096 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.985100 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.985104 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.985108 | orchestrator |
2026-04-04 00:26:15.985112 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-04 00:26:15.985115 | orchestrator | Saturday 04 April 2026 00:26:07 +0000 (0:00:00.462) 0:00:32.628 ********
2026-04-04 00:26:15.985119 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-04 00:26:15.985124 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-04 00:26:15.985128 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-04 00:26:15.985132 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-04 00:26:15.985136 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-04 00:26:15.985140 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-04 00:26:15.985145 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-04 00:26:15.985152 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-04 00:26:15.985159 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-04 00:26:15.985165 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-04 00:26:15.985172 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-04 00:26:15.985178 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-04 00:26:15.985184 | orchestrator |
2026-04-04 00:26:15.985190 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-04 00:26:15.985196 | orchestrator | Saturday 04 April 2026 00:26:11 +0000 (0:00:03.591) 0:00:36.220 ********
2026-04-04 00:26:15.985202 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.985208 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.985215 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.985222 | orchestrator |
2026-04-04 00:26:15.985229 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-04 00:26:15.985234 | orchestrator |
2026-04-04 00:26:15.985247 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-04 00:26:15.985253 | orchestrator | Saturday 04 April 2026 00:26:12 +0000 (0:00:01.380) 0:00:37.600 ********
2026-04-04 00:26:15.985299 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:26:15.985308 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:26:15.985316 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:26:15.985323 | orchestrator | ok: [testbed-manager]
2026-04-04 00:26:15.985332 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:15.985339 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:15.985348 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:15.985356 | orchestrator |
2026-04-04 00:26:15.985363 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:26:15.985371 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:26:15.985389 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:26:15.985396 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:26:15.985402 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:26:15.985408 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:26:15.985414 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:26:15.985420 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:26:15.985426 | orchestrator |
2026-04-04 00:26:15.985432 | orchestrator |
2026-04-04 00:26:15.985437 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:26:15.985443 | orchestrator | Saturday 04 April 2026 00:26:15 +0000 (0:00:03.577) 0:00:41.178 ********
2026-04-04 00:26:15.985448 | orchestrator | ===============================================================================
2026-04-04 00:26:15.985454 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.94s
2026-04-04 00:26:15.985461 | orchestrator | Install required packages (Debian) -------------------------------------- 8.43s
2026-04-04 00:26:15.985468 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s
2026-04-04 00:26:15.985475 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.58s
2026-04-04 00:26:15.985481 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.38s
2026-04-04 00:26:15.985487 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2026-04-04 00:26:15.985503 | orchestrator | Copy fact file ---------------------------------------------------------- 1.32s
2026-04-04 00:26:16.162838 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-04-04 00:26:16.162911 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-04-04 00:26:16.162918 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2026-04-04 00:26:16.162925 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-04-04 00:26:16.162930 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2026-04-04 00:26:16.162936 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-04-04 00:26:16.162942 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-04-04 00:26:16.162947 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-04-04 00:26:16.162971 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-04-04 00:26:16.162989 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-04-04 00:26:16.162995 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-04-04 00:26:16.361915 | orchestrator | + osism apply bootstrap
2026-04-04 00:26:27.666746 | orchestrator | 2026-04-04 00:26:27 | INFO  | Prepare task for execution of bootstrap.
2026-04-04 00:26:27.741924 | orchestrator | 2026-04-04 00:26:27 | INFO  | Task 195c5525-c058-466f-9b90-949b29033255 (bootstrap) was prepared for execution.
2026-04-04 00:26:27.741998 | orchestrator | 2026-04-04 00:26:27 | INFO  | It takes a moment until task 195c5525-c058-466f-9b90-949b29033255 (bootstrap) has been started and output is visible here.
2026-04-04 00:26:43.101707 | orchestrator |
2026-04-04 00:26:43.101794 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-04 00:26:43.101802 | orchestrator |
2026-04-04 00:26:43.101808 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-04 00:26:43.101813 | orchestrator | Saturday 04 April 2026 00:26:30 +0000 (0:00:00.189) 0:00:00.189 ********
2026-04-04 00:26:43.101818 | orchestrator | ok: [testbed-manager]
2026-04-04 00:26:43.101824 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:26:43.101829 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:26:43.101834 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:26:43.101839 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:43.101844 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:43.101852 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:43.101858 | orchestrator |
2026-04-04 00:26:43.101864 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-04 00:26:43.101871 | orchestrator |
2026-04-04 00:26:43.101878 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-04 00:26:43.101886 | orchestrator | Saturday 04 April 2026 00:26:31 +0000 (0:00:00.291) 0:00:00.480 ********
2026-04-04 00:26:43.101893 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:26:43.101899 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:26:43.101906 | orchestrator | ok: [testbed-manager]
2026-04-04 00:26:43.101913 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:26:43.101920 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:26:43.101926 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:26:43.101933 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:26:43.101939 | orchestrator |
2026-04-04 00:26:43.101947 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-04 00:26:43.101954 | orchestrator |
2026-04-04 00:26:43.101962 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-04 00:26:43.101969 | orchestrator | Saturday 04 April 2026 00:26:35 +0000 (0:00:04.540) 0:00:05.021 ********
2026-04-04 00:26:43.101978 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-04 00:26:43.101986 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-04 00:26:43.101994 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-04 00:26:43.102000 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-04 00:26:43.102005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:26:43.102010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:26:43.102058 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-04 00:26:43.102068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:26:43.102075 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-04 00:26:43.102108 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-04 00:26:43.102117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-04 00:26:43.102125 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-04 00:26:43.102133 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-04 00:26:43.102165 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-04 00:26:43.102170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-04 00:26:43.102175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-04 00:26:43.102180 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-04 00:26:43.102184 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:26:43.102189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-04 00:26:43.102194 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-04 00:26:43.102199 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-04 00:26:43.102205 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:26:43.102210 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-04 00:26:43.102216 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-04 00:26:43.102221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-04 00:26:43.102226 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-04 00:26:43.102232 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-04 00:26:43.102237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:26:43.102243 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-04 00:26:43.102248 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-04 00:26:43.102253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-04 00:26:43.102258 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-04 00:26:43.102263 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-04 00:26:43.102269 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-04 00:26:43.102274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:26:43.102279 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-04 00:26:43.102285 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-04 00:26:43.102290 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:26:43.102296 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-04 00:26:43.102303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-04 00:26:43.102312 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-04 00:26:43.102317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:26:43.102322 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-04 00:26:43.102327 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:26:43.102331 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-04 00:26:43.102336 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-04 00:26:43.102355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:26:43.102360 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-04 00:26:43.102365 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:26:43.102370 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-04 00:26:43.102375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:26:43.102382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-04 00:26:43.102389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:26:43.102394 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:26:43.102398 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-04 00:26:43.102403 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:26:43.102408 | orchestrator |
2026-04-04
00:26:43.102412 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-04-04 00:26:43.102417 | orchestrator | 2026-04-04 00:26:43.102421 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-04-04 00:26:43.102431 | orchestrator | Saturday 04 April 2026 00:26:36 +0000 (0:00:00.434) 0:00:05.456 ******** 2026-04-04 00:26:43.102436 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:43.102440 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:43.102445 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:43.102450 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:43.102457 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:43.102464 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:43.102469 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:43.102473 | orchestrator | 2026-04-04 00:26:43.102478 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-04-04 00:26:43.102483 | orchestrator | Saturday 04 April 2026 00:26:37 +0000 (0:00:01.304) 0:00:06.760 ******** 2026-04-04 00:26:43.102487 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:43.102492 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:43.102496 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:43.102501 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:43.102505 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:43.102510 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:43.102514 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:43.102520 | orchestrator | 2026-04-04 00:26:43.102528 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-04-04 00:26:43.102534 | orchestrator | Saturday 04 April 2026 00:26:38 +0000 (0:00:01.166) 0:00:07.926 ******** 2026-04-04 00:26:43.102540 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:26:43.102547 | orchestrator | 2026-04-04 00:26:43.102551 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-04 00:26:43.102556 | orchestrator | Saturday 04 April 2026 00:26:38 +0000 (0:00:00.263) 0:00:08.190 ******** 2026-04-04 00:26:43.102561 | orchestrator | changed: [testbed-manager] 2026-04-04 00:26:43.102565 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:26:43.102570 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:26:43.102574 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:26:43.102602 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:26:43.102610 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:43.102618 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:26:43.102624 | orchestrator | 2026-04-04 00:26:43.102631 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-04 00:26:43.102638 | orchestrator | Saturday 04 April 2026 00:26:40 +0000 (0:00:01.505) 0:00:09.696 ******** 2026-04-04 00:26:43.102643 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:26:43.102649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:26:43.102678 | orchestrator | 2026-04-04 00:26:43.102683 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-04 00:26:43.102688 | orchestrator | Saturday 04 April 2026 00:26:40 +0000 (0:00:00.307) 0:00:10.003 ******** 2026-04-04 00:26:43.102696 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:43.102716 | 
orchestrator | changed: [testbed-node-1] 2026-04-04 00:26:43.102721 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:26:43.102726 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:26:43.102730 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:26:43.102735 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:26:43.102739 | orchestrator | 2026-04-04 00:26:43.102744 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-04 00:26:43.102748 | orchestrator | Saturday 04 April 2026 00:26:41 +0000 (0:00:01.140) 0:00:11.144 ******** 2026-04-04 00:26:43.102753 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:26:43.102760 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:26:43.102773 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:26:43.102778 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:26:43.102782 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:43.102787 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:26:43.102791 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:26:43.102795 | orchestrator | 2026-04-04 00:26:43.102800 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-04 00:26:43.102808 | orchestrator | Saturday 04 April 2026 00:26:42 +0000 (0:00:00.639) 0:00:11.783 ******** 2026-04-04 00:26:43.102812 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:26:43.102817 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:26:43.102821 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:26:43.102826 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:26:43.102831 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:26:43.102838 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:26:43.102846 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:43.102851 | orchestrator | 2026-04-04 00:26:43.102859 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-04 00:26:43.102866 | orchestrator | Saturday 04 April 2026 00:26:42 +0000 (0:00:00.411) 0:00:12.194 ******** 2026-04-04 00:26:43.102871 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:26:43.102875 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:26:43.102885 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:26:55.464059 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:26:55.464163 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:26:55.464173 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:26:55.464179 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:26:55.464186 | orchestrator | 2026-04-04 00:26:55.464193 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-04 00:26:55.464202 | orchestrator | Saturday 04 April 2026 00:26:43 +0000 (0:00:00.193) 0:00:12.388 ******** 2026-04-04 00:26:55.464211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:26:55.464232 | orchestrator | 2026-04-04 00:26:55.464239 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-04 00:26:55.464247 | orchestrator | Saturday 04 April 2026 00:26:43 +0000 (0:00:00.272) 0:00:12.661 ******** 2026-04-04 00:26:55.464254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:26:55.464260 | orchestrator | 2026-04-04 00:26:55.464267 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-04 
00:26:55.464273 | orchestrator | Saturday 04 April 2026 00:26:43 +0000 (0:00:00.302) 0:00:12.963 ******** 2026-04-04 00:26:55.464280 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.464289 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.464293 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.464297 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.464301 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.464306 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.464309 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.464313 | orchestrator | 2026-04-04 00:26:55.464318 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-04 00:26:55.464322 | orchestrator | Saturday 04 April 2026 00:26:45 +0000 (0:00:01.644) 0:00:14.608 ******** 2026-04-04 00:26:55.464326 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:26:55.464330 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:26:55.464334 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:26:55.464338 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:26:55.464342 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:26:55.464365 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:26:55.464369 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:26:55.464373 | orchestrator | 2026-04-04 00:26:55.464380 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-04 00:26:55.464386 | orchestrator | Saturday 04 April 2026 00:26:45 +0000 (0:00:00.239) 0:00:14.847 ******** 2026-04-04 00:26:55.464391 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.464396 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.464402 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.464408 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.464414 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.464419 | orchestrator 
| ok: [testbed-node-3] 2026-04-04 00:26:55.464425 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.464430 | orchestrator | 2026-04-04 00:26:55.464436 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-04 00:26:55.464442 | orchestrator | Saturday 04 April 2026 00:26:46 +0000 (0:00:00.648) 0:00:15.496 ******** 2026-04-04 00:26:55.464447 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:26:55.464453 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:26:55.464459 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:26:55.464465 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:26:55.464471 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:26:55.464477 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:26:55.464483 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:26:55.464489 | orchestrator | 2026-04-04 00:26:55.464497 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-04 00:26:55.464504 | orchestrator | Saturday 04 April 2026 00:26:46 +0000 (0:00:00.222) 0:00:15.718 ******** 2026-04-04 00:26:55.464510 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.464517 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:26:55.464523 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:26:55.464530 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:26:55.464537 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:55.464541 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:26:55.464545 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:26:55.464551 | orchestrator | 2026-04-04 00:26:55.464557 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-04 00:26:55.464563 | orchestrator | Saturday 04 April 2026 00:26:47 +0000 (0:00:00.580) 0:00:16.299 ******** 2026-04-04 00:26:55.464568 | orchestrator | ok: 
[testbed-manager] 2026-04-04 00:26:55.464575 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:26:55.464603 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:55.464610 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:26:55.464619 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:26:55.464630 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:26:55.464637 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:26:55.464643 | orchestrator | 2026-04-04 00:26:55.464665 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-04 00:26:55.464673 | orchestrator | Saturday 04 April 2026 00:26:48 +0000 (0:00:01.227) 0:00:17.526 ******** 2026-04-04 00:26:55.464680 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.464688 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.464694 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.464699 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.464704 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.464713 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.464723 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.464729 | orchestrator | 2026-04-04 00:26:55.464735 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-04 00:26:55.464742 | orchestrator | Saturday 04 April 2026 00:26:49 +0000 (0:00:01.171) 0:00:18.698 ******** 2026-04-04 00:26:55.464769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:26:55.464788 | orchestrator | 2026-04-04 00:26:55.464793 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-04 00:26:55.464798 | orchestrator | Saturday 04 April 2026 
00:26:49 +0000 (0:00:00.287) 0:00:18.985 ******** 2026-04-04 00:26:55.464802 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:26:55.464807 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:26:55.464811 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:26:55.464815 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:26:55.464820 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:26:55.464824 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:55.464828 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:26:55.464833 | orchestrator | 2026-04-04 00:26:55.464837 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-04 00:26:55.464841 | orchestrator | Saturday 04 April 2026 00:26:51 +0000 (0:00:01.294) 0:00:20.280 ******** 2026-04-04 00:26:55.464845 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.464849 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.464854 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.464858 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.464862 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.464866 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.464871 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.464875 | orchestrator | 2026-04-04 00:26:55.464880 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-04 00:26:55.464884 | orchestrator | Saturday 04 April 2026 00:26:51 +0000 (0:00:00.215) 0:00:20.495 ******** 2026-04-04 00:26:55.464888 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.464892 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.464897 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.464901 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.464905 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.464935 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.464940 | 
orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.464944 | orchestrator | 2026-04-04 00:26:55.464949 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-04 00:26:55.464954 | orchestrator | Saturday 04 April 2026 00:26:51 +0000 (0:00:00.241) 0:00:20.737 ******** 2026-04-04 00:26:55.464958 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.464962 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.464967 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.464971 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.464976 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.464980 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.464984 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.464989 | orchestrator | 2026-04-04 00:26:55.465002 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-04 00:26:55.465006 | orchestrator | Saturday 04 April 2026 00:26:51 +0000 (0:00:00.199) 0:00:20.936 ******** 2026-04-04 00:26:55.465011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:26:55.465017 | orchestrator | 2026-04-04 00:26:55.465020 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-04 00:26:55.465024 | orchestrator | Saturday 04 April 2026 00:26:52 +0000 (0:00:00.274) 0:00:21.211 ******** 2026-04-04 00:26:55.465028 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.465032 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.465036 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.465039 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.465043 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.465047 | orchestrator | ok: 
[testbed-node-3] 2026-04-04 00:26:55.465050 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.465054 | orchestrator | 2026-04-04 00:26:55.465058 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-04 00:26:55.465067 | orchestrator | Saturday 04 April 2026 00:26:52 +0000 (0:00:00.539) 0:00:21.751 ******** 2026-04-04 00:26:55.465087 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:26:55.465091 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:26:55.465095 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:26:55.465099 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:26:55.465103 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:26:55.465107 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:26:55.465110 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:26:55.465114 | orchestrator | 2026-04-04 00:26:55.465118 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-04 00:26:55.465122 | orchestrator | Saturday 04 April 2026 00:26:52 +0000 (0:00:00.205) 0:00:21.956 ******** 2026-04-04 00:26:55.465125 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.465129 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:26:55.465133 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:55.465137 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.465140 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.465144 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.465148 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:26:55.465152 | orchestrator | 2026-04-04 00:26:55.465155 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-04 00:26:55.465160 | orchestrator | Saturday 04 April 2026 00:26:53 +0000 (0:00:01.138) 0:00:23.095 ******** 2026-04-04 00:26:55.465163 | orchestrator | ok: [testbed-manager] 2026-04-04 
00:26:55.465167 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:26:55.465171 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:26:55.465175 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.465179 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.465182 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:26:55.465186 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:26:55.465190 | orchestrator | 2026-04-04 00:26:55.465208 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-04 00:26:55.465213 | orchestrator | Saturday 04 April 2026 00:26:54 +0000 (0:00:00.552) 0:00:23.648 ******** 2026-04-04 00:26:55.465217 | orchestrator | ok: [testbed-manager] 2026-04-04 00:26:55.465220 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:26:55.465224 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:26:55.465228 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:26:55.465237 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:27:38.354579 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:27:38.354759 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:27:38.354771 | orchestrator | 2026-04-04 00:27:38.354780 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-04 00:27:38.354789 | orchestrator | Saturday 04 April 2026 00:26:55 +0000 (0:00:01.099) 0:00:24.747 ******** 2026-04-04 00:27:38.354800 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:27:38.354814 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:27:38.354828 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:27:38.354841 | orchestrator | changed: [testbed-manager] 2026-04-04 00:27:38.354855 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:27:38.354869 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:27:38.354879 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:27:38.354886 | orchestrator | 2026-04-04 00:27:38.354895 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-04 00:27:38.354903 | orchestrator | Saturday 04 April 2026 00:27:13 +0000 (0:00:18.334) 0:00:43.081 ******** 2026-04-04 00:27:38.354911 | orchestrator | ok: [testbed-manager] 2026-04-04 00:27:38.354918 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:27:38.354925 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:27:38.354932 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:27:38.354940 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:27:38.354947 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:27:38.354954 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:27:38.354983 | orchestrator | 2026-04-04 00:27:38.354991 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-04 00:27:38.354999 | orchestrator | Saturday 04 April 2026 00:27:14 +0000 (0:00:00.234) 0:00:43.315 ******** 2026-04-04 00:27:38.355006 | orchestrator | ok: [testbed-manager] 2026-04-04 00:27:38.355013 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:27:38.355021 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:27:38.355028 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:27:38.355035 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:27:38.355042 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:27:38.355049 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:27:38.355056 | orchestrator | 2026-04-04 00:27:38.355064 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-04 00:27:38.355071 | orchestrator | Saturday 04 April 2026 00:27:14 +0000 (0:00:00.230) 0:00:43.545 ******** 2026-04-04 00:27:38.355078 | orchestrator | ok: [testbed-manager] 2026-04-04 00:27:38.355085 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:27:38.355092 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:27:38.355099 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:27:38.355107 | orchestrator | ok: 
[testbed-node-3] 2026-04-04 00:27:38.355115 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:27:38.355123 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:27:38.355131 | orchestrator | 2026-04-04 00:27:38.355139 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-04 00:27:38.355148 | orchestrator | Saturday 04 April 2026 00:27:14 +0000 (0:00:00.197) 0:00:43.743 ******** 2026-04-04 00:27:38.355158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:27:38.355168 | orchestrator | 2026-04-04 00:27:38.355176 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-04 00:27:38.355200 | orchestrator | Saturday 04 April 2026 00:27:14 +0000 (0:00:00.244) 0:00:43.987 ******** 2026-04-04 00:27:38.355209 | orchestrator | ok: [testbed-manager] 2026-04-04 00:27:38.355217 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:27:38.355226 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:27:38.355234 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:27:38.355242 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:27:38.355251 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:27:38.355259 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:27:38.355268 | orchestrator | 2026-04-04 00:27:38.355278 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-04 00:27:38.355290 | orchestrator | Saturday 04 April 2026 00:27:16 +0000 (0:00:02.063) 0:00:46.050 ******** 2026-04-04 00:27:38.355301 | orchestrator | changed: [testbed-manager] 2026-04-04 00:27:38.355314 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:27:38.355326 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:27:38.355338 | orchestrator | 
changed: [testbed-node-1]
2026-04-04 00:27:38.355350 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:27:38.355362 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:27:38.355372 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:27:38.355383 | orchestrator |
2026-04-04 00:27:38.355391 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-04 00:27:38.355398 | orchestrator | Saturday 04 April 2026 00:27:17 +0000 (0:00:01.127) 0:00:47.178 ********
2026-04-04 00:27:38.355405 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:38.355412 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:38.355419 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:38.355426 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:38.355433 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:38.355441 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:38.355448 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:38.355455 | orchestrator |
2026-04-04 00:27:38.355462 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-04 00:27:38.355476 | orchestrator | Saturday 04 April 2026 00:27:19 +0000 (0:00:01.565) 0:00:48.744 ********
2026-04-04 00:27:38.355489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:27:38.355498 | orchestrator |
2026-04-04 00:27:38.355505 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-04 00:27:38.355514 | orchestrator | Saturday 04 April 2026 00:27:19 +0000 (0:00:00.242) 0:00:48.987 ********
2026-04-04 00:27:38.355521 | orchestrator | changed: [testbed-manager]
2026-04-04 00:27:38.355528 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:27:38.355536 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:27:38.355543 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:27:38.355550 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:27:38.355557 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:27:38.355564 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:27:38.355572 | orchestrator |
2026-04-04 00:27:38.355617 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-04 00:27:38.355627 | orchestrator | Saturday 04 April 2026 00:27:20 +0000 (0:00:01.075) 0:00:50.062 ********
2026-04-04 00:27:38.355634 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:27:38.355642 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:27:38.355649 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:27:38.355657 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:27:38.355664 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:27:38.355671 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:27:38.355679 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:27:38.355686 | orchestrator |
2026-04-04 00:27:38.355693 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-04 00:27:38.355701 | orchestrator | Saturday 04 April 2026 00:27:21 +0000 (0:00:00.185) 0:00:50.247 ********
2026-04-04 00:27:38.355708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:27:38.355716 | orchestrator |
2026-04-04 00:27:38.355723 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-04 00:27:38.355731 | orchestrator | Saturday 04 April 2026 00:27:21 +0000 (0:00:00.248) 0:00:50.496 ********
2026-04-04 00:27:38.355738 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:38.355745 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:38.355753 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:38.355760 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:38.355767 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:38.355775 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:38.355782 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:38.355789 | orchestrator |
2026-04-04 00:27:38.355796 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-04 00:27:38.355804 | orchestrator | Saturday 04 April 2026 00:27:23 +0000 (0:00:02.134) 0:00:52.631 ********
2026-04-04 00:27:38.355811 | orchestrator | changed: [testbed-manager]
2026-04-04 00:27:38.355818 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:27:38.355826 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:27:38.355833 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:27:38.355840 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:27:38.355848 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:27:38.355855 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:27:38.355862 | orchestrator |
2026-04-04 00:27:38.355869 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-04 00:27:38.355877 | orchestrator | Saturday 04 April 2026 00:27:24 +0000 (0:00:01.205) 0:00:53.836 ********
2026-04-04 00:27:38.355884 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:27:38.355902 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:27:38.355923 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:27:38.355930 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:27:38.355938 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:27:38.355945 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:27:38.355952 | orchestrator | changed: [testbed-manager]
2026-04-04 00:27:38.355959 | orchestrator |
2026-04-04 00:27:38.355967 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-04 00:27:38.355974 | orchestrator | Saturday 04 April 2026 00:27:35 +0000 (0:00:10.886) 0:01:04.723 ********
2026-04-04 00:27:38.355981 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:38.355989 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:38.355996 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:38.356003 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:38.356011 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:38.356018 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:38.356025 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:38.356032 | orchestrator |
2026-04-04 00:27:38.356039 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-04 00:27:38.356047 | orchestrator | Saturday 04 April 2026 00:27:36 +0000 (0:00:01.123) 0:01:05.846 ********
2026-04-04 00:27:38.356054 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:38.356061 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:38.356068 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:38.356075 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:38.356083 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:38.356090 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:38.356097 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:38.356104 | orchestrator |
2026-04-04 00:27:38.356111 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-04 00:27:38.356119 | orchestrator | Saturday 04 April 2026 00:27:37 +0000 (0:00:01.098) 0:01:06.945 ********
2026-04-04 00:27:38.356126 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:38.356133 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:38.356140 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:38.356147 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:38.356155 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:38.356162 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:38.356169 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:38.356176 | orchestrator |
2026-04-04 00:27:38.356184 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-04 00:27:38.356191 | orchestrator | Saturday 04 April 2026 00:27:37 +0000 (0:00:00.179) 0:01:07.124 ********
2026-04-04 00:27:38.356198 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:38.356205 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:38.356217 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:38.356224 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:38.356232 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:38.356239 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:38.356246 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:38.356253 | orchestrator |
2026-04-04 00:27:38.356261 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-04 00:27:38.356268 | orchestrator | Saturday 04 April 2026 00:27:38 +0000 (0:00:00.186) 0:01:07.310 ********
2026-04-04 00:27:38.356276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:27:38.356283 | orchestrator |
2026-04-04 00:27:38.356296 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-04 00:29:49.737992 | orchestrator | Saturday 04 April 2026 00:27:38 +0000 (0:00:00.241) 0:01:07.552 ********
2026-04-04 00:29:49.738251 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:49.738286 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:49.738301 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:49.738341 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:49.738393 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:49.738416 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:49.738438 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:49.738459 | orchestrator |
2026-04-04 00:29:49.738481 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-04 00:29:49.738504 | orchestrator | Saturday 04 April 2026 00:27:40 +0000 (0:00:02.109) 0:01:09.661 ********
2026-04-04 00:29:49.738528 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:49.738548 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:49.738569 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:49.738590 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:49.738611 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:49.738688 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:49.738701 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:49.738714 | orchestrator |
2026-04-04 00:29:49.738727 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-04 00:29:49.738741 | orchestrator | Saturday 04 April 2026 00:27:41 +0000 (0:00:00.684) 0:01:10.346 ********
2026-04-04 00:29:49.738754 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:49.738767 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:49.738780 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:49.738792 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:49.738805 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:49.738818 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:49.738830 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:49.738841 | orchestrator |
2026-04-04 00:29:49.738852 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-04 00:29:49.738863 | orchestrator | Saturday 04 April 2026 00:27:41 +0000 (0:00:00.186) 0:01:10.532 ********
2026-04-04 00:29:49.738873 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:49.738884 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:49.738894 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:49.738905 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:49.738915 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:49.738926 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:49.738936 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:49.738947 | orchestrator |
2026-04-04 00:29:49.738958 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-04 00:29:49.738968 | orchestrator | Saturday 04 April 2026 00:27:42 +0000 (0:00:01.465) 0:01:11.998 ********
2026-04-04 00:29:49.738979 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:49.738990 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:49.739001 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:49.739011 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:49.739022 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:49.739033 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:49.739043 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:49.739054 | orchestrator |
2026-04-04 00:29:49.739065 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-04 00:29:49.739076 | orchestrator | Saturday 04 April 2026 00:27:45 +0000 (0:00:02.228) 0:01:14.226 ********
2026-04-04 00:29:49.739087 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:49.739097 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:49.739108 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:49.739119 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:49.739130 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:49.739140 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:49.739151 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:49.739162 | orchestrator |
2026-04-04 00:29:49.739173 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-04 00:29:49.739183 | orchestrator | Saturday 04 April 2026 00:27:48 +0000 (0:00:03.360) 0:01:17.587 ********
2026-04-04 00:29:49.739195 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:49.739215 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:49.739233 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:49.739253 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:49.739287 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:49.739304 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:49.739324 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:49.739342 | orchestrator |
2026-04-04 00:29:49.739355 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-04 00:29:49.739365 | orchestrator | Saturday 04 April 2026 00:28:23 +0000 (0:00:34.781) 0:01:52.370 ********
2026-04-04 00:29:49.739376 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:49.739387 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:49.739398 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:49.739409 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:49.739420 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:49.739431 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:49.739441 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:49.739452 | orchestrator |
2026-04-04 00:29:49.739463 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-04 00:29:49.739474 | orchestrator | Saturday 04 April 2026 00:29:36 +0000 (0:01:13.079) 0:03:05.450 ********
2026-04-04 00:29:49.739485 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:49.739496 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:49.739506 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:49.739517 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:49.739528 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:49.739539 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:49.739550 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:49.739561 | orchestrator |
2026-04-04 00:29:49.739572 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-04 00:29:49.739583 | orchestrator | Saturday 04 April 2026 00:29:38 +0000 (0:00:02.340) 0:03:07.791 ********
2026-04-04 00:29:49.739594 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:49.739605 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:49.739641 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:49.739654 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:49.739665 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:49.739675 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:49.739686 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:49.739697 | orchestrator |
2026-04-04 00:29:49.739710 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-04 00:29:49.739729 | orchestrator | Saturday 04 April 2026 00:29:48 +0000 (0:00:10.133) 0:03:17.925 ********
2026-04-04 00:29:49.739803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-04 00:29:49.739842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-04 00:29:49.739866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-04 00:29:49.739888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-04 00:29:49.739915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-04 00:29:49.739932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-04 00:29:49.739951 | orchestrator |
2026-04-04 00:29:49.739970 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-04 00:29:49.739990 | orchestrator | Saturday 04 April 2026 00:29:49 +0000 (0:00:00.324) 0:03:18.250 ********
2026-04-04 00:29:49.740009 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:29:49.740027 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:49.740046 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:29:49.740058 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:29:49.740069 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:29:49.740080 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:29:49.740090 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:29:49.740115 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:29:49.740134 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:29:49.740153 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:29:49.740171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:29:49.740191 | orchestrator |
2026-04-04 00:29:49.740209 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-04 00:29:49.740235 | orchestrator | Saturday 04 April 2026 00:29:49 +0000 (0:00:00.619) 0:03:18.869 ********
2026-04-04 00:29:49.740249 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:29:49.740262 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:29:49.740273 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:29:49.740284 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:29:49.740294 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:29:49.740316 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:29:58.059512 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:29:58.059694 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:29:58.059713 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:29:58.059752 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:29:58.059760 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:58.059767 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:29:58.059826 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:29:58.059833 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:29:58.059838 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:29:58.059842 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:29:58.059847 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:29:58.059863 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:29:58.059868 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:29:58.059873 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:29:58.059877 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:29:58.059882 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:29:58.059886 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:29:58.059892 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:29:58.059897 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:29:58.059901 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:29:58.059906 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:29:58.059910 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:29:58.059915 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:29:58.059919 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:29:58.059924 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:29:58.059928 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:29:58.059933 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:29:58.059937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:29:58.059942 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:29:58.059947 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:29:58.059951 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:29:58.059956 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:29:58.059960 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:29:58.059965 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:29:58.059969 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:29:58.059974 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:29:58.059989 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:29:58.059994 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:29:58.059999 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:29:58.060009 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:29:58.060015 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:29:58.060021 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:29:58.060026 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:29:58.060047 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:29:58.060052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:29:58.060058 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:29:58.060064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:29:58.060069 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:29:58.060075 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:29:58.060081 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:29:58.060086 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:29:58.060092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:29:58.060097 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:29:58.060103 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:29:58.060108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:29:58.060114 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:29:58.060119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:29:58.060125 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:29:58.060130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:29:58.060135 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:29:58.060141 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:29:58.060146 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:29:58.060152 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:29:58.060157 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:29:58.060162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:29:58.060168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:29:58.060174 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:29:58.060179 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:29:58.060185 | orchestrator |
2026-04-04 00:29:58.060191 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-04 00:29:58.060196 | orchestrator | Saturday 04 April 2026 00:29:56 +0000 (0:00:07.076) 0:03:25.946 ********
2026-04-04 00:29:58.060202 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:29:58.060207 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:29:58.060217 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:29:58.060223 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:29:58.060228 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:29:58.060233 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:29:58.060239 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:29:58.060244 | orchestrator |
2026-04-04 00:29:58.060250 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-04 00:29:58.060256 | orchestrator | Saturday 04 April 2026 00:29:57 +0000 (0:00:00.731) 0:03:26.677 ********
2026-04-04 00:29:58.060261 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:29:58.060269 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:58.060275 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:29:58.060280 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:29:58.060286 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:29:58.060291 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:29:58.060297 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:29:58.060302 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:29:58.060308 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:29:58.060314 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:29:58.060323 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269480 | orchestrator |
2026-04-04 00:30:10.269582 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-04 00:30:10.269597 | orchestrator | Saturday 04 April 2026 00:29:58 +0000 (0:00:00.614) 0:03:27.292 ********
2026-04-04 00:30:10.269607 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269718 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:30:10.269731 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269741 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269750 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:30:10.269758 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:30:10.269767 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269776 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:30:10.269785 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269793 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269802 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:30:10.269811 | orchestrator |
2026-04-04 00:30:10.269820 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-04 00:30:10.269828 | orchestrator | Saturday 04 April 2026 00:29:58 +0000 (0:00:00.551) 0:03:27.844 ********
2026-04-04 00:30:10.269837 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:30:10.269846 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:30:10.269854 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:30:10.269888 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:30:10.269897 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:30:10.269906 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:30:10.269914 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:30:10.269923 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:30:10.269932 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:30:10.269941 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:30:10.269949 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:30:10.269957 | orchestrator |
2026-04-04 00:30:10.269966 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-04 00:30:10.269975 | orchestrator | Saturday 04 April 2026 00:29:59 +0000 (0:00:00.281) 0:03:28.603 ********
2026-04-04 00:30:10.269984 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:30:10.269993 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:30:10.270003 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:30:10.270014 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:30:10.270070 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:30:10.270080 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:30:10.270090 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:30:10.270100 | orchestrator |
2026-04-04 00:30:10.270110 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-04 00:30:10.270119 | orchestrator | Saturday 04 April 2026 00:29:59 +0000 (0:00:00.281) 0:03:28.885 ********
2026-04-04 00:30:10.270129 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:30:10.270140 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:30:10.270150 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:30:10.270160 | orchestrator | ok: [testbed-manager]
2026-04-04 00:30:10.270169 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:30:10.270179 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:30:10.270188 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:30:10.270198 | orchestrator |
2026-04-04 00:30:10.270208 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-04 00:30:10.270218 | orchestrator | Saturday 04 April 2026 00:30:04 +0000 (0:00:04.930) 0:03:33.816 ********
2026-04-04 00:30:10.270229 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-04 00:30:10.270238 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-04 00:30:10.270248 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:30:10.270258 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-04 00:30:10.270268 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:30:10.270279 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-04 00:30:10.270289 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:30:10.270298 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-04 00:30:10.270308 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:30:10.270318 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:30:10.270327 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-04 00:30:10.270338 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:30:10.270348 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-04 00:30:10.270358
| orchestrator | skipping: [testbed-node-5] 2026-04-04 00:30:10.270366 | orchestrator | 2026-04-04 00:30:10.270375 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-04 00:30:10.270383 | orchestrator | Saturday 04 April 2026 00:30:04 +0000 (0:00:00.276) 0:03:34.093 ******** 2026-04-04 00:30:10.270392 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-04 00:30:10.270401 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-04 00:30:10.270409 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-04 00:30:10.270443 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-04 00:30:10.270453 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-04 00:30:10.270461 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-04 00:30:10.270470 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-04 00:30:10.270478 | orchestrator | 2026-04-04 00:30:10.270488 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-04 00:30:10.270503 | orchestrator | Saturday 04 April 2026 00:30:05 +0000 (0:00:01.075) 0:03:35.168 ******** 2026-04-04 00:30:10.270518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:30:10.270533 | orchestrator | 2026-04-04 00:30:10.270546 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-04 00:30:10.270560 | orchestrator | Saturday 04 April 2026 00:30:06 +0000 (0:00:00.385) 0:03:35.553 ******** 2026-04-04 00:30:10.270574 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:10.270588 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:30:10.270602 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:30:10.270644 | orchestrator | ok: 
[testbed-node-0] 2026-04-04 00:30:10.270659 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:30:10.270673 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:30:10.270687 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:30:10.270707 | orchestrator | 2026-04-04 00:30:10.270724 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-04 00:30:10.270738 | orchestrator | Saturday 04 April 2026 00:30:07 +0000 (0:00:01.454) 0:03:37.008 ******** 2026-04-04 00:30:10.270752 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:10.270767 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:30:10.270779 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:30:10.270792 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:30:10.270806 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:30:10.270818 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:30:10.270830 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:30:10.270844 | orchestrator | 2026-04-04 00:30:10.270878 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-04 00:30:10.270892 | orchestrator | Saturday 04 April 2026 00:30:08 +0000 (0:00:00.620) 0:03:37.629 ******** 2026-04-04 00:30:10.270905 | orchestrator | changed: [testbed-manager] 2026-04-04 00:30:10.270918 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:10.270930 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:10.270943 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:10.270956 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:10.270969 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:10.270982 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:10.270996 | orchestrator | 2026-04-04 00:30:10.271011 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-04 00:30:10.271025 | orchestrator | Saturday 04 April 2026 00:30:09 +0000 (0:00:00.621) 
0:03:38.251 ******** 2026-04-04 00:30:10.271037 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:10.271050 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:30:10.271063 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:30:10.271075 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:30:10.271088 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:30:10.271102 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:30:10.271116 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:30:10.271130 | orchestrator | 2026-04-04 00:30:10.271146 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-04 00:30:10.271160 | orchestrator | Saturday 04 April 2026 00:30:09 +0000 (0:00:00.663) 0:03:38.914 ******** 2026-04-04 00:30:10.271177 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261143.2628903, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:10.271221 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261169.1723247, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:10.271238 | orchestrator | 
changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261143.8584228, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:10.271273 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261154.4350638, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.992958 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261172.1659286, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993093 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261180.666819, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993121 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261139.1481833, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993144 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993200 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993239 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993259 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993315 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993336 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993355 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:30:15.993374 | orchestrator | 2026-04-04 00:30:15.993395 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-04 00:30:15.993416 | orchestrator | Saturday 04 April 2026 00:30:10 +0000 (0:00:01.088) 0:03:40.002 ******** 2026-04-04 00:30:15.993449 | orchestrator | changed: [testbed-manager] 2026-04-04 00:30:15.993499 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:15.993518 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:15.993537 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:15.993555 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:15.993573 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:15.993592 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:15.993609 | orchestrator | 2026-04-04 00:30:15.993670 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-04 00:30:15.993690 | orchestrator | Saturday 04 April 2026 00:30:11 +0000 (0:00:01.177) 0:03:41.180 ******** 2026-04-04 00:30:15.993709 | orchestrator | changed: [testbed-manager] 2026-04-04 00:30:15.993728 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:15.993746 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:15.993762 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:15.993780 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:15.993799 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:15.993819 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:15.993837 | orchestrator | 2026-04-04 00:30:15.993855 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-04 00:30:15.993874 | orchestrator | Saturday 04 April 2026 00:30:13 +0000 (0:00:01.266) 0:03:42.447 ******** 2026-04-04 00:30:15.993892 | orchestrator | changed: [testbed-manager] 2026-04-04 00:30:15.993910 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:15.993927 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:15.993946 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:15.993963 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:15.993981 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:15.993998 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:15.994067 | orchestrator | 2026-04-04 00:30:15.994129 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-04 00:30:15.994160 | orchestrator | Saturday 04 April 2026 00:30:14 +0000 (0:00:01.220) 0:03:43.667 ******** 2026-04-04 00:30:15.994178 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:30:15.994196 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:30:15.994215 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:30:15.994233 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 00:30:15.994253 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:30:15.994270 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:30:15.994289 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:30:15.994308 | orchestrator | 2026-04-04 00:30:15.994326 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-04 00:30:15.994344 | orchestrator | Saturday 04 April 2026 00:30:14 +0000 (0:00:00.349) 0:03:44.017 ******** 2026-04-04 00:30:15.994363 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:15.994382 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:30:15.994400 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:30:15.994418 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:30:15.994438 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:30:15.994458 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:30:15.994477 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:30:15.994496 | orchestrator | 2026-04-04 00:30:15.994516 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-04 00:30:15.994535 | orchestrator | Saturday 04 April 2026 00:30:15 +0000 (0:00:00.820) 0:03:44.838 ******** 2026-04-04 00:30:15.994556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:30:15.994577 | orchestrator | 2026-04-04 00:30:15.994596 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-04 00:30:15.994667 | orchestrator | Saturday 04 April 2026 00:30:15 +0000 (0:00:00.350) 0:03:45.188 ******** 2026-04-04 00:31:38.313249 | orchestrator | ok: [testbed-manager] 2026-04-04 00:31:38.313338 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:31:38.313349 | orchestrator | changed: 
[testbed-node-4] 2026-04-04 00:31:38.313356 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:31:38.313363 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:31:38.313370 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:31:38.313378 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:31:38.313385 | orchestrator | 2026-04-04 00:31:38.313393 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-04 00:31:38.313401 | orchestrator | Saturday 04 April 2026 00:30:25 +0000 (0:00:09.536) 0:03:54.725 ******** 2026-04-04 00:31:38.313408 | orchestrator | ok: [testbed-manager] 2026-04-04 00:31:38.313415 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:31:38.313421 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:31:38.313428 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:31:38.313435 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:31:38.313441 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:31:38.313448 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:31:38.313454 | orchestrator | 2026-04-04 00:31:38.313461 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-04 00:31:38.313468 | orchestrator | Saturday 04 April 2026 00:30:27 +0000 (0:00:01.571) 0:03:56.296 ******** 2026-04-04 00:31:38.313475 | orchestrator | ok: [testbed-manager] 2026-04-04 00:31:38.313481 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:31:38.313488 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:31:38.313495 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:31:38.313501 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:31:38.313508 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:31:38.313514 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:31:38.313521 | orchestrator | 2026-04-04 00:31:38.313594 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-04 00:31:38.313607 | orchestrator | 
Saturday 04 April 2026 00:30:28 +0000 (0:00:01.060) 0:03:57.357 ******** 2026-04-04 00:31:38.313614 | orchestrator | ok: [testbed-manager] 2026-04-04 00:31:38.313620 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:31:38.313627 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:31:38.313634 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:31:38.313640 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:31:38.313646 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:31:38.313653 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:31:38.313659 | orchestrator | 2026-04-04 00:31:38.313666 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-04 00:31:38.313674 | orchestrator | Saturday 04 April 2026 00:30:28 +0000 (0:00:00.275) 0:03:57.633 ******** 2026-04-04 00:31:38.313681 | orchestrator | ok: [testbed-manager] 2026-04-04 00:31:38.313687 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:31:38.313694 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:31:38.313700 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:31:38.313706 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:31:38.313713 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:31:38.313719 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:31:38.313726 | orchestrator | 2026-04-04 00:31:38.313733 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-04 00:31:38.313739 | orchestrator | Saturday 04 April 2026 00:30:28 +0000 (0:00:00.288) 0:03:57.921 ******** 2026-04-04 00:31:38.313746 | orchestrator | ok: [testbed-manager] 2026-04-04 00:31:38.313753 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:31:38.313759 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:31:38.313766 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:31:38.313773 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:31:38.313780 | orchestrator | ok: [testbed-node-4] 2026-04-04 
00:31:38.313788 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:31:38.313795 | orchestrator | 2026-04-04 00:31:38.313803 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-04 00:31:38.313814 | orchestrator | Saturday 04 April 2026 00:30:28 +0000 (0:00:00.271) 0:03:58.193 ******** 2026-04-04 00:31:38.313853 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:31:38.313862 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:31:38.313870 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:31:38.313877 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:31:38.313885 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:31:38.313893 | orchestrator | ok: [testbed-manager] 2026-04-04 00:31:38.313901 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:31:38.313909 | orchestrator | 2026-04-04 00:31:38.313917 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-04-04 00:31:38.313924 | orchestrator | Saturday 04 April 2026 00:30:33 +0000 (0:00:04.789) 0:04:02.982 ******** 2026-04-04 00:31:38.313933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:31:38.313942 | orchestrator | 2026-04-04 00:31:38.313951 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-04-04 00:31:38.313959 | orchestrator | Saturday 04 April 2026 00:30:34 +0000 (0:00:00.397) 0:04:03.380 ******** 2026-04-04 00:31:38.313968 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-04-04 00:31:38.313976 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-04-04 00:31:38.313983 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-04-04 00:31:38.313991 | orchestrator | skipping: 
[testbed-node-0] => (item=apt-daily)  2026-04-04 00:31:38.313999 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:31:38.314007 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-04-04 00:31:38.314060 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-04-04 00:31:38.314068 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:31:38.314076 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-04-04 00:31:38.314084 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-04-04 00:31:38.314093 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:31:38.314100 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-04-04 00:31:38.314108 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-04-04 00:31:38.314117 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:31:38.314125 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-04-04 00:31:38.314133 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-04-04 00:31:38.314155 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:31:38.314164 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:31:38.314171 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-04-04 00:31:38.314178 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-04-04 00:31:38.314184 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:31:38.314191 | orchestrator | 2026-04-04 00:31:38.314198 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-04-04 00:31:38.314204 | orchestrator | Saturday 04 April 2026 00:30:34 +0000 (0:00:00.334) 0:04:03.715 ******** 2026-04-04 00:31:38.314211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:31:38.314218 | orchestrator | 2026-04-04 00:31:38.314225 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-04-04 00:31:38.314231 | orchestrator | Saturday 04 April 2026 00:30:34 +0000 (0:00:00.448) 0:04:04.163 ******** 2026-04-04 00:31:38.314238 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-04-04 00:31:38.314245 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-04-04 00:31:38.314251 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:31:38.314264 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-04-04 00:31:38.314271 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:31:38.314277 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:31:38.314297 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-04-04 00:31:38.314304 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-04-04 00:31:38.314311 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:31:38.314317 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-04-04 00:31:38.314324 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:31:38.314331 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:31:38.314337 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-04-04 00:31:38.314344 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:31:38.314350 | orchestrator | 2026-04-04 00:31:38.314357 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-04-04 00:31:38.314364 | orchestrator | Saturday 04 April 2026 00:30:35 +0000 (0:00:00.301) 0:04:04.465 ******** 2026-04-04 00:31:38.314370 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:38.314377 | orchestrator |
2026-04-04 00:31:38.314384 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-04 00:31:38.314390 | orchestrator | Saturday 04 April 2026 00:30:35 +0000 (0:00:00.391) 0:04:04.856 ********
2026-04-04 00:31:38.314397 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:38.314404 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.314410 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.314417 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.314423 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.314430 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.314436 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.314443 | orchestrator |
2026-04-04 00:31:38.314449 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-04 00:31:38.314456 | orchestrator | Saturday 04 April 2026 00:31:11 +0000 (0:00:35.905) 0:04:40.761 ********
2026-04-04 00:31:38.314463 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:38.314469 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.314476 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.314483 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.314489 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.314496 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.314505 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.314512 | orchestrator |
2026-04-04 00:31:38.314519 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-04 00:31:38.314525 | orchestrator | Saturday 04 April 2026 00:31:20 +0000 (0:00:09.183) 0:04:49.945 ********
2026-04-04 00:31:38.314552 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:38.314559 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.314566 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.314572 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.314578 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.314585 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.314591 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.314598 | orchestrator |
2026-04-04 00:31:38.314604 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-04 00:31:38.314611 | orchestrator | Saturday 04 April 2026 00:31:29 +0000 (0:00:08.728) 0:04:58.673 ********
2026-04-04 00:31:38.314617 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:38.314624 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:38.314631 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:38.314637 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:38.314644 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:38.314656 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:38.314663 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:38.314669 | orchestrator |
2026-04-04 00:31:38.314676 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-04 00:31:38.314682 | orchestrator | Saturday 04 April 2026 00:31:31 +0000 (0:00:02.096) 0:05:00.770 ********
2026-04-04 00:31:38.314689 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:38.314696 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.314702 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.314709 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.314715 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.314722 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.314728 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.314735 | orchestrator |
2026-04-04 00:31:38.314746 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-04 00:31:49.236647 | orchestrator | Saturday 04 April 2026 00:31:38 +0000 (0:00:06.734) 0:05:07.505 ********
2026-04-04 00:31:49.236765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:49.236781 | orchestrator |
2026-04-04 00:31:49.236791 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-04 00:31:49.236799 | orchestrator | Saturday 04 April 2026 00:31:38 +0000 (0:00:00.398) 0:05:07.903 ********
2026-04-04 00:31:49.236808 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:49.236817 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:49.236826 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:49.236834 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:49.236841 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:49.236849 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:49.236857 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:49.236865 | orchestrator |
2026-04-04 00:31:49.236874 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-04 00:31:49.236882 | orchestrator | Saturday 04 April 2026 00:31:39 +0000 (0:00:00.726) 0:05:08.629 ********
2026-04-04 00:31:49.236890 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:49.236899 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:49.236907 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:49.236915 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:49.236923 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:49.236931 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:49.236938 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:49.236946 | orchestrator |
2026-04-04 00:31:49.236954 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-04 00:31:49.236962 | orchestrator | Saturday 04 April 2026 00:31:41 +0000 (0:00:01.881) 0:05:10.511 ********
2026-04-04 00:31:49.236970 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:49.236978 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:49.236986 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:49.236994 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:49.237002 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:49.237010 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:49.237018 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:49.237025 | orchestrator |
2026-04-04 00:31:49.237033 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-04 00:31:49.237041 | orchestrator | Saturday 04 April 2026 00:31:42 +0000 (0:00:00.771) 0:05:11.282 ********
2026-04-04 00:31:49.237049 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:49.237057 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:49.237065 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:49.237073 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:49.237081 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:49.237089 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:49.237119 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:49.237129 | orchestrator |
2026-04-04 00:31:49.237139 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-04 00:31:49.237149 | orchestrator | Saturday 04 April 2026 00:31:42 +0000 (0:00:00.248) 0:05:11.531 ********
2026-04-04 00:31:49.237158 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:49.237167 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:49.237181 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:49.237193 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:49.237206 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:49.237225 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:49.237242 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:49.237255 | orchestrator |
2026-04-04 00:31:49.237268 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-04 00:31:49.237281 | orchestrator | Saturday 04 April 2026 00:31:42 +0000 (0:00:00.370) 0:05:11.901 ********
2026-04-04 00:31:49.237294 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:49.237307 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:49.237320 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:49.237333 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:49.237346 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:49.237376 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:49.237390 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:49.237404 | orchestrator |
2026-04-04 00:31:49.237417 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-04 00:31:49.237431 | orchestrator | Saturday 04 April 2026 00:31:43 +0000 (0:00:00.347) 0:05:12.249 ********
2026-04-04 00:31:49.237445 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:49.237455 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:49.237467 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:49.237481 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:49.237494 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:49.237506 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:49.237545 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:49.237560 | orchestrator |
2026-04-04 00:31:49.237573 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-04 00:31:49.237589 | orchestrator | Saturday 04 April 2026 00:31:43 +0000 (0:00:00.240) 0:05:12.490 ********
2026-04-04 00:31:49.237603 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:49.237615 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:49.237628 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:49.237640 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:49.237648 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:49.237655 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:49.237663 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:49.237671 | orchestrator |
2026-04-04 00:31:49.237679 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-04 00:31:49.237687 | orchestrator | Saturday 04 April 2026 00:31:43 +0000 (0:00:00.286) 0:05:12.776 ********
2026-04-04 00:31:49.237695 | orchestrator | ok: [testbed-manager] =>
2026-04-04 00:31:49.237703 | orchestrator |  docker_version: 5:27.5.1
2026-04-04 00:31:49.237710 | orchestrator | ok: [testbed-node-0] =>
2026-04-04 00:31:49.237718 | orchestrator |  docker_version: 5:27.5.1
2026-04-04 00:31:49.237726 | orchestrator | ok: [testbed-node-1] =>
2026-04-04 00:31:49.237734 | orchestrator |  docker_version: 5:27.5.1
2026-04-04 00:31:49.237741 | orchestrator | ok: [testbed-node-2] =>
2026-04-04 00:31:49.237749 | orchestrator |  docker_version: 5:27.5.1
2026-04-04 00:31:49.237772 | orchestrator | ok: [testbed-node-3] =>
2026-04-04 00:31:49.237781 | orchestrator |  docker_version: 5:27.5.1
2026-04-04 00:31:49.237789 | orchestrator | ok: [testbed-node-4] =>
2026-04-04 00:31:49.237796 | orchestrator |  docker_version: 5:27.5.1
2026-04-04 00:31:49.237804 | orchestrator | ok: [testbed-node-5] =>
2026-04-04 00:31:49.237812 | orchestrator |  docker_version: 5:27.5.1
2026-04-04 00:31:49.237820 | orchestrator |
2026-04-04 00:31:49.237837 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-04 00:31:49.237845 | orchestrator | Saturday 04 April 2026 00:31:43 +0000 (0:00:00.246) 0:05:13.022 ********
2026-04-04 00:31:49.237853 | orchestrator | ok: [testbed-manager] =>
2026-04-04 00:31:49.237860 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-04 00:31:49.237868 | orchestrator | ok: [testbed-node-0] =>
2026-04-04 00:31:49.237876 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-04 00:31:49.237883 | orchestrator | ok: [testbed-node-1] =>
2026-04-04 00:31:49.237891 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-04 00:31:49.237899 | orchestrator | ok: [testbed-node-2] =>
2026-04-04 00:31:49.237907 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-04 00:31:49.237914 | orchestrator | ok: [testbed-node-3] =>
2026-04-04 00:31:49.237922 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-04 00:31:49.237930 | orchestrator | ok: [testbed-node-4] =>
2026-04-04 00:31:49.237937 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-04 00:31:49.237945 | orchestrator | ok: [testbed-node-5] =>
2026-04-04 00:31:49.237953 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-04 00:31:49.237960 | orchestrator |
2026-04-04 00:31:49.237968 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-04 00:31:49.237976 | orchestrator | Saturday 04 April 2026 00:31:44 +0000 (0:00:00.256) 0:05:13.279 ********
2026-04-04 00:31:49.237984 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:49.237992 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:49.238000 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:49.238008 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:49.238069 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:49.238080 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:49.238088 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:49.238096 | orchestrator |
2026-04-04 00:31:49.238104 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-04 00:31:49.238112 | orchestrator | Saturday 04 April 2026 00:31:44 +0000 (0:00:00.233) 0:05:13.512 ********
2026-04-04 00:31:49.238120 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:49.238127 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:49.238135 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:49.238143 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:49.238151 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:49.238159 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:49.238166 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:49.238174 | orchestrator |
2026-04-04 00:31:49.238182 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-04 00:31:49.238190 | orchestrator | Saturday 04 April 2026 00:31:44 +0000 (0:00:00.244) 0:05:13.757 ********
2026-04-04 00:31:49.238200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:49.238210 | orchestrator |
2026-04-04 00:31:49.238218 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-04 00:31:49.238226 | orchestrator | Saturday 04 April 2026 00:31:44 +0000 (0:00:00.394) 0:05:14.151 ********
2026-04-04 00:31:49.238233 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:49.238241 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:49.238249 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:49.238257 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:49.238265 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:49.238273 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:49.238280 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:49.238288 | orchestrator |
2026-04-04 00:31:49.238296 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-04 00:31:49.238310 | orchestrator | Saturday 04 April 2026 00:31:45 +0000 (0:00:00.866) 0:05:15.018 ********
2026-04-04 00:31:49.238324 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:49.238332 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:49.238340 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:49.238348 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:49.238355 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:49.238363 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:49.238371 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:49.238379 | orchestrator |
2026-04-04 00:31:49.238387 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-04 00:31:49.238396 | orchestrator | Saturday 04 April 2026 00:31:48 +0000 (0:00:03.029) 0:05:18.047 ********
2026-04-04 00:31:49.238404 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-04 00:31:49.238413 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-04 00:31:49.238421 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-04 00:31:49.238429 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-04 00:31:49.238437 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-04 00:31:49.238444 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-04 00:31:49.238452 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:49.238460 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-04 00:31:49.238468 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-04 00:31:49.238476 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-04 00:31:49.238484 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:49.238492 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-04 00:31:49.238500 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-04 00:31:49.238545 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-04 00:31:49.238554 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:49.238562 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-04 00:31:49.238577 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-04 00:32:55.375484 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-04 00:32:55.375580 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:32:55.375589 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-04 00:32:55.375597 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-04 00:32:55.375604 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-04 00:32:55.375611 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:32:55.375617 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:32:55.375624 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-04 00:32:55.375630 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-04 00:32:55.375636 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-04 00:32:55.375643 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:32:55.375649 | orchestrator |
2026-04-04 00:32:55.375657 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-04 00:32:55.375665 | orchestrator | Saturday 04 April 2026 00:31:49 +0000 (0:00:00.628) 0:05:18.676 ********
2026-04-04 00:32:55.375672 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.375678 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.375685 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.375692 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.375698 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.375705 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.375711 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.375718 | orchestrator |
2026-04-04 00:32:55.375724 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-04 00:32:55.375730 | orchestrator | Saturday 04 April 2026 00:31:57 +0000 (0:00:07.768) 0:05:26.444 ********
2026-04-04 00:32:55.375736 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.375743 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.375771 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.375778 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.375784 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.375790 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.375797 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.375803 | orchestrator |
2026-04-04 00:32:55.375809 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-04 00:32:55.375817 | orchestrator | Saturday 04 April 2026 00:31:58 +0000 (0:00:01.097) 0:05:27.541 ********
2026-04-04 00:32:55.375823 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.375829 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.375835 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.375841 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.375848 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.375854 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.375860 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.375865 | orchestrator |
2026-04-04 00:32:55.375872 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-04 00:32:55.375878 | orchestrator | Saturday 04 April 2026 00:32:06 +0000 (0:00:08.603) 0:05:36.145 ********
2026-04-04 00:32:55.375885 | orchestrator | changed: [testbed-manager]
2026-04-04 00:32:55.375891 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.375898 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.375904 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.375910 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.375916 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.375923 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.375928 | orchestrator |
2026-04-04 00:32:55.375935 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-04 00:32:55.375942 | orchestrator | Saturday 04 April 2026 00:32:10 +0000 (0:00:03.426) 0:05:39.571 ********
2026-04-04 00:32:55.375948 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.375954 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.375960 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.375966 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.375972 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.375978 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.375985 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.375991 | orchestrator |
2026-04-04 00:32:55.376011 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-04 00:32:55.376017 | orchestrator | Saturday 04 April 2026 00:32:11 +0000 (0:00:01.406) 0:05:40.978 ********
2026-04-04 00:32:55.376024 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.376030 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.376036 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.376043 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.376049 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.376055 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.376061 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.376067 | orchestrator |
2026-04-04 00:32:55.376074 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-04 00:32:55.376080 | orchestrator | Saturday 04 April 2026 00:32:13 +0000 (0:00:00.552) 0:05:42.279 ********
2026-04-04 00:32:55.376087 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:32:55.376093 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:32:55.376099 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:32:55.376105 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:32:55.376111 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:32:55.376118 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:32:55.376124 | orchestrator | changed: [testbed-manager]
2026-04-04 00:32:55.376130 | orchestrator |
2026-04-04 00:32:55.376136 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-04 00:32:55.376142 | orchestrator | Saturday 04 April 2026 00:32:13 +0000 (0:00:00.552) 0:05:42.831 ********
2026-04-04 00:32:55.376153 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.376160 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.376166 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.376172 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.376178 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.376184 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.376190 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.376197 | orchestrator |
2026-04-04 00:32:55.376203 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-04 00:32:55.376223 | orchestrator | Saturday 04 April 2026 00:32:24 +0000 (0:00:11.148) 0:05:53.979 ********
2026-04-04 00:32:55.376230 | orchestrator | changed: [testbed-manager]
2026-04-04 00:32:55.376236 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.376242 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.376249 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.376255 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.376261 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.376267 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.376274 | orchestrator |
2026-04-04 00:32:55.376280 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-04 00:32:55.376286 | orchestrator | Saturday 04 April 2026 00:32:25 +0000 (0:00:01.204) 0:05:55.184 ********
2026-04-04 00:32:55.376292 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.376299 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.376305 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.376311 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.376317 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.376324 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.376330 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.376336 | orchestrator |
2026-04-04 00:32:55.376342 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-04 00:32:55.376348 | orchestrator | Saturday 04 April 2026 00:32:36 +0000 (0:00:10.704) 0:06:05.888 ********
2026-04-04 00:32:55.376354 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.376360 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.376366 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.376372 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.376378 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.376385 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.376391 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.376397 | orchestrator |
2026-04-04 00:32:55.376403 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-04 00:32:55.376425 | orchestrator | Saturday 04 April 2026 00:32:48 +0000 (0:00:11.570) 0:06:17.458 ********
2026-04-04 00:32:55.376432 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-04 00:32:55.376438 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-04 00:32:55.376444 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-04 00:32:55.376449 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-04 00:32:55.376454 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-04 00:32:55.376460 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-04 00:32:55.376466 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-04 00:32:55.376472 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-04 00:32:55.376478 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-04 00:32:55.376483 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-04 00:32:55.376489 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-04 00:32:55.376496 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-04 00:32:55.376502 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-04 00:32:55.376508 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-04 00:32:55.376518 | orchestrator |
2026-04-04 00:32:55.376525 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-04 00:32:55.376531 | orchestrator | Saturday 04 April 2026 00:32:49 +0000 (0:00:01.184) 0:06:18.643 ********
2026-04-04 00:32:55.376537 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:32:55.376544 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:32:55.376551 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:32:55.376557 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:32:55.376563 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:32:55.376569 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:32:55.376576 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:32:55.376581 | orchestrator |
2026-04-04 00:32:55.376587 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-04 00:32:55.376593 | orchestrator | Saturday 04 April 2026 00:32:50 +0000 (0:00:00.629) 0:06:19.272 ********
2026-04-04 00:32:55.376599 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:55.376605 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:55.376611 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:55.376616 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:55.376622 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:55.376628 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:55.376634 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:55.376640 | orchestrator |
2026-04-04 00:32:55.376646 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-04 00:32:55.376654 | orchestrator | Saturday 04 April 2026 00:32:54 +0000 (0:00:04.550) 0:06:23.822 ********
2026-04-04 00:32:55.376659 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:32:55.376665 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:32:55.376672 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:32:55.376678 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:32:55.376684 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:32:55.376690 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:32:55.376695 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:32:55.376701 | orchestrator |
2026-04-04 00:32:55.376708 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-04 00:32:55.376747 | orchestrator | Saturday 04 April 2026 00:32:55 +0000 (0:00:00.471) 0:06:24.294 ********
2026-04-04 00:32:55.376753 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-04 00:32:55.376759 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-04 00:32:55.376764 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:32:55.376770 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-04 00:32:55.376775 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-04 00:32:55.376781 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:32:55.376787 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-04 00:32:55.376793 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-04 00:32:55.376798 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:32:55.376810 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-04 00:33:14.573930 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-04 00:33:14.574105 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:14.574124 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-04 00:33:14.574137 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-04 00:33:14.574148 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:14.574159 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-04 00:33:14.574169 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-04 00:33:14.574180 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:14.574222 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-04 00:33:14.574235 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-04 00:33:14.574275 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:14.574287 | orchestrator |
2026-04-04 00:33:14.574300 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-04 00:33:14.574312 | orchestrator | Saturday 04 April 2026 00:32:55 +0000 (0:00:00.551) 0:06:24.845 ********
2026-04-04 00:33:14.574323 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:14.574334 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:14.574344 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:33:14.574355 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:14.574366 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:14.574426 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:14.574437 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:14.574451 | orchestrator |
2026-04-04 00:33:14.574463 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-04 00:33:14.574476 | orchestrator | Saturday 04 April 2026 00:32:56 +0000 (0:00:00.454) 0:06:25.299 ********
2026-04-04 00:33:14.574488 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:14.574501 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:14.574513 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:33:14.574525 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:14.574537 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:14.574550 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:14.574562 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:14.574575 | orchestrator |
2026-04-04 00:33:14.574586 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-04 00:33:14.574598 | orchestrator | Saturday 04 April 2026 00:32:56 +0000 (0:00:00.611) 0:06:25.911 ********
2026-04-04 00:33:14.574612 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:14.574625 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:14.574636 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:33:14.574648 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:14.574660 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:14.574673 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:14.574686 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:14.574699 | orchestrator |
2026-04-04 00:33:14.574711 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-04 00:33:14.574724 | orchestrator | Saturday 04 April 2026 00:32:57 +0000 (0:00:00.538) 0:06:26.450 ********
2026-04-04 00:33:14.574736 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:14.574748 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:33:14.574760 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:33:14.574772 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:33:14.574784 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:33:14.574797 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:33:14.574809 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:33:14.574819 | orchestrator |
2026-04-04 00:33:14.574830 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-04 00:33:14.574841 | orchestrator | Saturday 04 April 2026 00:32:59 +0000 (0:00:01.803) 0:06:28.253 ********
2026-04-04 00:33:14.574868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:33:14.574882 | orchestrator |
2026-04-04 00:33:14.574893 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-04 00:33:14.574904 | orchestrator | Saturday 04 April 2026 00:32:59 +0000 (0:00:00.795) 0:06:29.049 ********
2026-04-04 00:33:14.574914 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:14.574925 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:14.574936 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:14.574947 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:14.574958 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:14.574976 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:14.574987 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:14.574998 | orchestrator |
2026-04-04 00:33:14.575009 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-04 00:33:14.575019 | orchestrator | Saturday 04 April 2026 00:33:00 +0000 (0:00:01.018) 0:06:30.067 ********
2026-04-04 00:33:14.575030 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:14.575041 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:14.575051 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:14.575062 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:14.575072 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:14.575083 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:14.575093 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:14.575104 | orchestrator |
2026-04-04 00:33:14.575114 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-04 00:33:14.575125 | orchestrator | Saturday 04 April 2026 00:33:01 +0000 (0:00:00.844) 0:06:30.911 ********
2026-04-04 00:33:14.575136 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:14.575146 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:14.575157 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:14.575168 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:14.575178 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:14.575189 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:14.575200 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:14.575210 | orchestrator |
2026-04-04 00:33:14.575221 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-04 00:33:14.575251 | orchestrator | Saturday 04 April 2026 00:33:03 +0000 (0:00:01.352) 0:06:32.264 ********
2026-04-04 00:33:14.575263 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:14.575274 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:33:14.575284 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:33:14.575295 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:33:14.575306 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:33:14.575316 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:33:14.575327 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:33:14.575338 | orchestrator |
2026-04-04 00:33:14.575349 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-04 00:33:14.575359 | orchestrator | Saturday 04 April 2026 00:33:04 +0000 (0:00:01.535) 0:06:33.799 ********
2026-04-04 00:33:14.575390 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:14.575402 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:14.575412 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:14.575423 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:14.575433 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:14.575444 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:14.575455 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:14.575465 | orchestrator |
2026-04-04
00:33:14.575476 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-04 00:33:14.575487 | orchestrator | Saturday 04 April 2026 00:33:05 +0000 (0:00:01.314) 0:06:35.114 ******** 2026-04-04 00:33:14.575497 | orchestrator | changed: [testbed-manager] 2026-04-04 00:33:14.575508 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:33:14.575518 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:33:14.575529 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:33:14.575540 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:33:14.575550 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:33:14.575561 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:33:14.575571 | orchestrator | 2026-04-04 00:33:14.575582 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-04 00:33:14.575593 | orchestrator | Saturday 04 April 2026 00:33:07 +0000 (0:00:01.528) 0:06:36.642 ******** 2026-04-04 00:33:14.575604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:33:14.575630 | orchestrator | 2026-04-04 00:33:14.575641 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-04 00:33:14.575652 | orchestrator | Saturday 04 April 2026 00:33:08 +0000 (0:00:00.852) 0:06:37.495 ******** 2026-04-04 00:33:14.575663 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:14.575673 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:14.575684 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:14.575695 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:14.575705 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:14.575716 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:14.575726 | orchestrator | ok: 
[testbed-node-5] 2026-04-04 00:33:14.575737 | orchestrator | 2026-04-04 00:33:14.575748 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-04 00:33:14.575759 | orchestrator | Saturday 04 April 2026 00:33:09 +0000 (0:00:01.498) 0:06:38.994 ******** 2026-04-04 00:33:14.575769 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:14.575780 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:14.575790 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:14.575801 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:14.575812 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:14.575822 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:14.575833 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:14.575843 | orchestrator | 2026-04-04 00:33:14.575854 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-04 00:33:14.575865 | orchestrator | Saturday 04 April 2026 00:33:11 +0000 (0:00:01.467) 0:06:40.461 ******** 2026-04-04 00:33:14.575876 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:14.575886 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:14.575897 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:14.575908 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:14.575918 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:14.575929 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:14.575939 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:14.575950 | orchestrator | 2026-04-04 00:33:14.575962 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-04 00:33:14.575973 | orchestrator | Saturday 04 April 2026 00:33:12 +0000 (0:00:01.079) 0:06:41.541 ******** 2026-04-04 00:33:14.575983 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:14.575994 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:14.576004 | orchestrator | ok: [testbed-node-1] 2026-04-04 
00:33:14.576015 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:14.576025 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:14.576036 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:14.576046 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:14.576057 | orchestrator | 2026-04-04 00:33:14.576068 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-04 00:33:14.576078 | orchestrator | Saturday 04 April 2026 00:33:13 +0000 (0:00:01.140) 0:06:42.682 ******** 2026-04-04 00:33:14.576089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:33:14.576100 | orchestrator | 2026-04-04 00:33:14.576111 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-04 00:33:14.576122 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.817) 0:06:43.499 ******** 2026-04-04 00:33:14.576132 | orchestrator | 2026-04-04 00:33:14.576143 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-04 00:33:14.576154 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.038) 0:06:43.538 ******** 2026-04-04 00:33:14.576164 | orchestrator | 2026-04-04 00:33:14.576175 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-04 00:33:14.576186 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.189) 0:06:43.728 ******** 2026-04-04 00:33:14.576196 | orchestrator | 2026-04-04 00:33:14.576214 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-04 00:33:14.576232 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.040) 0:06:43.768 ******** 2026-04-04 00:33:41.512754 | orchestrator | 
2026-04-04 00:33:41.512848 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:33:41.512860 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.039) 0:06:43.807 ********
2026-04-04 00:33:41.512868 | orchestrator |
2026-04-04 00:33:41.512875 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:33:41.512882 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.044) 0:06:43.852 ********
2026-04-04 00:33:41.512889 | orchestrator |
2026-04-04 00:33:41.512896 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:33:41.512903 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.039) 0:06:43.891 ********
2026-04-04 00:33:41.512910 | orchestrator |
2026-04-04 00:33:41.512917 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-04 00:33:41.512923 | orchestrator | Saturday 04 April 2026 00:33:14 +0000 (0:00:00.044) 0:06:43.935 ********
2026-04-04 00:33:41.512930 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:33:41.512939 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:33:41.512950 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:33:41.512965 | orchestrator |
2026-04-04 00:33:41.512983 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-04 00:33:41.512994 | orchestrator | Saturday 04 April 2026 00:33:15 +0000 (0:00:01.247) 0:06:45.183 ********
2026-04-04 00:33:41.513004 | orchestrator | changed: [testbed-manager]
2026-04-04 00:33:41.513017 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:41.513028 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:41.513038 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:41.513049 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:41.513060 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:41.513072 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:41.513083 | orchestrator |
2026-04-04 00:33:41.513094 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-04 00:33:41.513106 | orchestrator | Saturday 04 April 2026 00:33:17 +0000 (0:00:01.297) 0:06:46.481 ********
2026-04-04 00:33:41.513117 | orchestrator | changed: [testbed-manager]
2026-04-04 00:33:41.513130 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:41.513142 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:41.513154 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:41.513167 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:41.513176 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:41.513182 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:41.513189 | orchestrator |
2026-04-04 00:33:41.513196 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-04 00:33:41.513203 | orchestrator | Saturday 04 April 2026 00:33:18 +0000 (0:00:01.165) 0:06:47.646 ********
2026-04-04 00:33:41.513210 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:41.513216 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:41.513223 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:41.513230 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:41.513236 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:41.513243 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:41.513250 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:41.513256 | orchestrator |
2026-04-04 00:33:41.513263 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-04 00:33:41.513270 | orchestrator | Saturday 04 April 2026 00:33:20 +0000 (0:00:02.345) 0:06:49.991 ********
2026-04-04 00:33:41.513276 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:41.513283 | orchestrator |
2026-04-04 00:33:41.513293 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-04 00:33:41.513307 | orchestrator | Saturday 04 April 2026 00:33:20 +0000 (0:00:00.107) 0:06:50.099 ********
2026-04-04 00:33:41.513374 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:41.513386 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:41.513397 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:41.513409 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:41.513420 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:41.513431 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:41.513442 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:33:41.513452 | orchestrator |
2026-04-04 00:33:41.513478 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-04 00:33:41.513491 | orchestrator | Saturday 04 April 2026 00:33:22 +0000 (0:00:01.160) 0:06:51.259 ********
2026-04-04 00:33:41.513502 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:41.513514 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:41.513524 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:33:41.513536 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:41.513547 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:41.513558 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:41.513568 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:41.513578 | orchestrator |
2026-04-04 00:33:41.513589 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-04 00:33:41.513602 | orchestrator | Saturday 04 April 2026 00:33:22 +0000 (0:00:00.511) 0:06:51.771 ********
2026-04-04 00:33:41.513614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:33:41.513627 | orchestrator |
2026-04-04 00:33:41.513639 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-04 00:33:41.513651 | orchestrator | Saturday 04 April 2026 00:33:23 +0000 (0:00:00.830) 0:06:52.601 ********
2026-04-04 00:33:41.513666 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:41.513678 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:33:41.513690 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:33:41.513702 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:33:41.513715 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:33:41.513727 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:33:41.513740 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:33:41.513752 | orchestrator |
2026-04-04 00:33:41.513764 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-04 00:33:41.513777 | orchestrator | Saturday 04 April 2026 00:33:24 +0000 (0:00:01.006) 0:06:53.607 ********
2026-04-04 00:33:41.513791 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-04 00:33:41.513823 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-04 00:33:41.513834 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-04 00:33:41.513844 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-04 00:33:41.513856 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-04 00:33:41.513868 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-04 00:33:41.513878 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-04 00:33:41.513890 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-04 00:33:41.513900 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-04 00:33:41.513911 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-04 00:33:41.513922 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-04 00:33:41.513934 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-04 00:33:41.513946 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-04 00:33:41.513958 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-04 00:33:41.513971 | orchestrator |
2026-04-04 00:33:41.513983 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-04 00:33:41.513996 | orchestrator | Saturday 04 April 2026 00:33:26 +0000 (0:00:02.449) 0:06:56.057 ********
2026-04-04 00:33:41.514076 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:41.514090 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:41.514102 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:33:41.514115 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:41.514128 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:41.514140 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:41.514152 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:41.514164 | orchestrator |
2026-04-04 00:33:41.514176 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-04 00:33:41.514189 | orchestrator | Saturday 04 April 2026 00:33:27 +0000 (0:00:00.475) 0:06:56.532 ********
2026-04-04 00:33:41.514203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:33:41.514217 | orchestrator |
2026-04-04 00:33:41.514230 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-04 00:33:41.514242 | orchestrator | Saturday 04 April 2026 00:33:28 +0000 (0:00:00.938) 0:06:57.471 ********
2026-04-04 00:33:41.514254 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:41.514266 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:33:41.514278 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:33:41.514290 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:33:41.514303 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:33:41.514315 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:33:41.514327 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:33:41.514338 | orchestrator |
2026-04-04 00:33:41.514420 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-04 00:33:41.514433 | orchestrator | Saturday 04 April 2026 00:33:29 +0000 (0:00:00.856) 0:06:58.328 ********
2026-04-04 00:33:41.514445 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:41.514456 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:33:41.514468 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:33:41.514481 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:33:41.514493 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:33:41.514504 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:33:41.514517 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:33:41.514529 | orchestrator |
2026-04-04 00:33:41.514541 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-04 00:33:41.514554 | orchestrator | Saturday 04 April 2026 00:33:29 +0000 (0:00:00.801) 0:06:59.129 ********
2026-04-04 00:33:41.514567 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:41.514579 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:41.514600 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:33:41.514613 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:41.514626 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:41.514638 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:41.514650 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:41.514662 | orchestrator |
2026-04-04 00:33:41.514675 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-04 00:33:41.514688 | orchestrator | Saturday 04 April 2026 00:33:30 +0000 (0:00:00.511) 0:06:59.640 ********
2026-04-04 00:33:41.514700 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:41.514713 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:33:41.514725 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:33:41.514738 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:33:41.514751 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:33:41.514763 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:33:41.514776 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:33:41.514788 | orchestrator |
2026-04-04 00:33:41.514802 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-04 00:33:41.514815 | orchestrator | Saturday 04 April 2026 00:33:32 +0000 (0:00:01.682) 0:07:01.323 ********
2026-04-04 00:33:41.514827 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:33:41.514849 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:33:41.514861 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:33:41.514872 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:33:41.514885 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:33:41.514896 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:33:41.514909 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:33:41.514922 | orchestrator |
2026-04-04 00:33:41.514935 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-04 00:33:41.514947 | orchestrator | Saturday 04 April 2026 00:33:32 +0000 (0:00:00.618) 0:07:01.942 ********
2026-04-04 00:33:41.514960 | orchestrator | ok: [testbed-manager]
2026-04-04 00:33:41.514973 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:33:41.514985 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:33:41.514998 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:33:41.515010 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:33:41.515023 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:33:41.515046 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:34:13.638939 | orchestrator |
2026-04-04 00:34:13.639052 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-04 00:34:13.639070 | orchestrator | Saturday 04 April 2026 00:33:41 +0000 (0:00:08.828) 0:07:10.770 ********
2026-04-04 00:34:13.639082 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.639094 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:34:13.639105 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:34:13.639114 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:34:13.639124 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:34:13.639134 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:34:13.639144 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:34:13.639154 | orchestrator |
2026-04-04 00:34:13.639164 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-04 00:34:13.639174 | orchestrator | Saturday 04 April 2026 00:33:42 +0000 (0:00:01.399) 0:07:12.170 ********
2026-04-04 00:34:13.639184 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:34:13.639194 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:34:13.639203 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:34:13.639213 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:34:13.639223 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:34:13.639233 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:34:13.639242 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.639252 | orchestrator |
2026-04-04 00:34:13.639262 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-04 00:34:13.639272 | orchestrator | Saturday 04 April 2026 00:33:44 +0000 (0:00:01.973) 0:07:14.144 ********
2026-04-04 00:34:13.639282 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.639350 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:34:13.639361 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:34:13.639371 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:34:13.639381 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:34:13.639390 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:34:13.639400 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:34:13.639409 | orchestrator |
2026-04-04 00:34:13.639419 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-04 00:34:13.639429 | orchestrator | Saturday 04 April 2026 00:33:46 +0000 (0:00:01.616) 0:07:15.760 ********
2026-04-04 00:34:13.639439 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.639448 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.639458 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.639470 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.639481 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.639492 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.639503 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.639515 | orchestrator |
2026-04-04 00:34:13.639526 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-04 00:34:13.639564 | orchestrator | Saturday 04 April 2026 00:33:47 +0000 (0:00:00.784) 0:07:16.544 ********
2026-04-04 00:34:13.639576 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:34:13.639587 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:34:13.639598 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:34:13.639609 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:34:13.639620 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:34:13.639632 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:34:13.639643 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:34:13.639654 | orchestrator |
2026-04-04 00:34:13.639665 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-04 00:34:13.639677 | orchestrator | Saturday 04 April 2026 00:33:48 +0000 (0:00:00.684) 0:07:17.229 ********
2026-04-04 00:34:13.639688 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:34:13.639699 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:34:13.639710 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:34:13.639721 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:34:13.639733 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:34:13.639744 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:34:13.639756 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:34:13.639767 | orchestrator |
2026-04-04 00:34:13.639778 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-04 00:34:13.639789 | orchestrator | Saturday 04 April 2026 00:33:48 +0000 (0:00:00.513) 0:07:17.742 ********
2026-04-04 00:34:13.639801 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.639812 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.639824 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.639833 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.639843 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.639853 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.639863 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.639872 | orchestrator |
2026-04-04 00:34:13.639882 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-04 00:34:13.639892 | orchestrator | Saturday 04 April 2026 00:33:48 +0000 (0:00:00.418) 0:07:18.161 ********
2026-04-04 00:34:13.639902 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.639912 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.639922 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.639931 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.639941 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.639950 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.639960 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.639970 | orchestrator |
2026-04-04 00:34:13.639980 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-04 00:34:13.639989 | orchestrator | Saturday 04 April 2026 00:33:49 +0000 (0:00:00.422) 0:07:18.583 ********
2026-04-04 00:34:13.639999 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.640009 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.640018 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.640028 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.640038 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.640047 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.640057 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.640067 | orchestrator |
2026-04-04 00:34:13.640077 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-04 00:34:13.640087 | orchestrator | Saturday 04 April 2026 00:33:49 +0000 (0:00:00.415) 0:07:18.999 ********
2026-04-04 00:34:13.640096 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.640106 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.640116 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.640125 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.640135 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.640145 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.640154 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.640164 | orchestrator |
2026-04-04 00:34:13.640191 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-04 00:34:13.640227 | orchestrator | Saturday 04 April 2026 00:33:55 +0000 (0:00:05.215) 0:07:24.214 ********
2026-04-04 00:34:13.640237 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:34:13.640247 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:34:13.640257 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:34:13.640266 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:34:13.640276 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:34:13.640286 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:34:13.640320 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:34:13.640330 | orchestrator |
2026-04-04 00:34:13.640349 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-04 00:34:13.640360 | orchestrator | Saturday 04 April 2026 00:33:55 +0000 (0:00:00.672) 0:07:24.887 ********
2026-04-04 00:34:13.640371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:34:13.640382 | orchestrator |
2026-04-04 00:34:13.640392 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-04 00:34:13.640402 | orchestrator | Saturday 04 April 2026 00:33:56 +0000 (0:00:00.773) 0:07:25.661 ********
2026-04-04 00:34:13.640411 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.640421 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.640431 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.640440 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.640449 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.640482 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.640492 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.640501 | orchestrator |
2026-04-04 00:34:13.640511 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-04 00:34:13.640521 | orchestrator | Saturday 04 April 2026 00:33:58 +0000 (0:00:01.982) 0:07:27.644 ********
2026-04-04 00:34:13.640530 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.640540 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.640549 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.640559 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.640568 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.640578 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.640587 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.640597 | orchestrator |
2026-04-04 00:34:13.640606 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-04 00:34:13.640616 | orchestrator | Saturday 04 April 2026 00:33:59 +0000 (0:00:01.230) 0:07:28.874 ********
2026-04-04 00:34:13.640626 | orchestrator | ok: [testbed-manager]
2026-04-04 00:34:13.640635 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:34:13.640645 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:34:13.640654 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:34:13.640663 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:34:13.640673 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:34:13.640682 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:34:13.640692 | orchestrator |
2026-04-04 00:34:13.640702 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-04 00:34:13.640711 | orchestrator | Saturday 04 April 2026 00:34:00 +0000 (0:00:00.821) 0:07:29.696 ********
2026-04-04 00:34:13.640721 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:34:13.640733 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:34:13.640743 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:34:13.640757 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:34:13.640774 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:34:13.640784 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:34:13.640794 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:34:13.640803 | orchestrator |
2026-04-04 00:34:13.640813 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-04 00:34:13.640823 | orchestrator | Saturday 04 April 2026 00:34:02 +0000 (0:00:01.750) 0:07:31.446 ********
2026-04-04 00:34:13.640833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:34:13.640843 | orchestrator |
2026-04-04 00:34:13.640853 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-04 00:34:13.640862 |
orchestrator | Saturday 04 April 2026 00:34:03 +0000 (0:00:00.794) 0:07:32.241 ******** 2026-04-04 00:34:13.640872 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:13.640882 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:13.640891 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:13.640901 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:13.640911 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:13.640921 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:13.640930 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:13.640940 | orchestrator | 2026-04-04 00:34:13.640957 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-04 00:34:42.834679 | orchestrator | Saturday 04 April 2026 00:34:13 +0000 (0:00:10.591) 0:07:42.832 ******** 2026-04-04 00:34:42.834786 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:42.834802 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:34:42.834813 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:42.834824 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:42.834835 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:42.834846 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:42.834858 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:42.834869 | orchestrator | 2026-04-04 00:34:42.834881 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-04 00:34:42.834892 | orchestrator | Saturday 04 April 2026 00:34:15 +0000 (0:00:01.600) 0:07:44.433 ******** 2026-04-04 00:34:42.834903 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:42.834914 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:34:42.834925 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:42.834937 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:42.834948 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:42.834959 | orchestrator | ok: [testbed-node-5] 
2026-04-04 00:34:42.834970 | orchestrator | 2026-04-04 00:34:42.834981 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-04 00:34:42.835011 | orchestrator | Saturday 04 April 2026 00:34:16 +0000 (0:00:01.319) 0:07:45.753 ******** 2026-04-04 00:34:42.835023 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.835035 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.835046 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.835057 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.835068 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.835079 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.835090 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.835101 | orchestrator | 2026-04-04 00:34:42.835112 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-04 00:34:42.835123 | orchestrator | 2026-04-04 00:34:42.835134 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-04 00:34:42.835171 | orchestrator | Saturday 04 April 2026 00:34:17 +0000 (0:00:01.164) 0:07:46.917 ******** 2026-04-04 00:34:42.835183 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:42.835194 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:42.835208 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:42.835220 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:42.835233 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:42.835276 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:42.835291 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:42.835309 | orchestrator | 2026-04-04 00:34:42.835328 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-04 00:34:42.835346 | orchestrator | 2026-04-04 00:34:42.835364 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-04-04 00:34:42.835382 | orchestrator | Saturday 04 April 2026 00:34:18 +0000 (0:00:00.445) 0:07:47.363 ******** 2026-04-04 00:34:42.835401 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.835419 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.835440 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.835458 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.835477 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.835497 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.835516 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.835534 | orchestrator | 2026-04-04 00:34:42.835553 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-04 00:34:42.835567 | orchestrator | Saturday 04 April 2026 00:34:19 +0000 (0:00:01.265) 0:07:48.629 ******** 2026-04-04 00:34:42.835577 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:42.835588 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:42.835599 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:34:42.835610 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:42.835620 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:42.835631 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:42.835642 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:42.835652 | orchestrator | 2026-04-04 00:34:42.835663 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-04 00:34:42.835675 | orchestrator | Saturday 04 April 2026 00:34:20 +0000 (0:00:01.407) 0:07:50.036 ******** 2026-04-04 00:34:42.835701 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:42.835712 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:42.835723 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:42.835733 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 00:34:42.835744 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:42.835755 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:42.835765 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:42.835776 | orchestrator | 2026-04-04 00:34:42.835787 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-04 00:34:42.835798 | orchestrator | Saturday 04 April 2026 00:34:21 +0000 (0:00:00.415) 0:07:50.451 ******** 2026-04-04 00:34:42.835809 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:34:42.835821 | orchestrator | 2026-04-04 00:34:42.835832 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-04 00:34:42.835842 | orchestrator | Saturday 04 April 2026 00:34:21 +0000 (0:00:00.736) 0:07:51.188 ******** 2026-04-04 00:34:42.835854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:34:42.835867 | orchestrator | 2026-04-04 00:34:42.835878 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-04 00:34:42.835889 | orchestrator | Saturday 04 April 2026 00:34:22 +0000 (0:00:00.939) 0:07:52.128 ******** 2026-04-04 00:34:42.835954 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.835967 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.835978 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.835988 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.835999 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.836010 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.836020 | 
orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.836031 | orchestrator | 2026-04-04 00:34:42.836061 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-04 00:34:42.836073 | orchestrator | Saturday 04 April 2026 00:34:32 +0000 (0:00:09.334) 0:08:01.462 ******** 2026-04-04 00:34:42.836084 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.836095 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.836105 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.836116 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.836127 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.836138 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.836148 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.836159 | orchestrator | 2026-04-04 00:34:42.836170 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-04 00:34:42.836181 | orchestrator | Saturday 04 April 2026 00:34:32 +0000 (0:00:00.740) 0:08:02.203 ******** 2026-04-04 00:34:42.836192 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.836203 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.836213 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.836224 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.836234 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.836245 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.836277 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.836288 | orchestrator | 2026-04-04 00:34:42.836299 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-04 00:34:42.836310 | orchestrator | Saturday 04 April 2026 00:34:34 +0000 (0:00:01.263) 0:08:03.466 ******** 2026-04-04 00:34:42.836321 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.836332 | orchestrator | 
changed: [testbed-node-1] 2026-04-04 00:34:42.836342 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.836353 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.836364 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.836374 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.836385 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.836396 | orchestrator | 2026-04-04 00:34:42.836406 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-04 00:34:42.836417 | orchestrator | Saturday 04 April 2026 00:34:35 +0000 (0:00:01.703) 0:08:05.169 ******** 2026-04-04 00:34:42.836428 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.836439 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.836450 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.836460 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.836471 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.836482 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.836492 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.836503 | orchestrator | 2026-04-04 00:34:42.836514 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-04 00:34:42.836525 | orchestrator | Saturday 04 April 2026 00:34:37 +0000 (0:00:01.203) 0:08:06.373 ******** 2026-04-04 00:34:42.836536 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.836547 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.836557 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.836568 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.836579 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.836589 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.836600 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.836611 | orchestrator | 2026-04-04 
00:34:42.836622 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-04 00:34:42.836639 | orchestrator | 2026-04-04 00:34:42.836650 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-04 00:34:42.836661 | orchestrator | Saturday 04 April 2026 00:34:38 +0000 (0:00:01.021) 0:08:07.394 ******** 2026-04-04 00:34:42.836672 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:34:42.836683 | orchestrator | 2026-04-04 00:34:42.836694 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-04 00:34:42.836705 | orchestrator | Saturday 04 April 2026 00:34:39 +0000 (0:00:00.943) 0:08:08.338 ******** 2026-04-04 00:34:42.836721 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:42.836793 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:34:42.836806 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:42.836817 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:42.836828 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:42.836838 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:42.836849 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:42.836859 | orchestrator | 2026-04-04 00:34:42.836870 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-04 00:34:42.836916 | orchestrator | Saturday 04 April 2026 00:34:39 +0000 (0:00:00.825) 0:08:09.163 ******** 2026-04-04 00:34:42.836933 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:42.836951 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:42.836963 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:42.836974 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:42.836985 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:42.836995 | 
orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:42.837006 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:42.837016 | orchestrator | 2026-04-04 00:34:42.837027 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-04 00:34:42.837038 | orchestrator | Saturday 04 April 2026 00:34:41 +0000 (0:00:01.248) 0:08:10.412 ******** 2026-04-04 00:34:42.837049 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:34:42.837060 | orchestrator | 2026-04-04 00:34:42.837070 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-04 00:34:42.837081 | orchestrator | Saturday 04 April 2026 00:34:42 +0000 (0:00:00.798) 0:08:11.211 ******** 2026-04-04 00:34:42.837092 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:42.837102 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:34:42.837113 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:42.837123 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:42.837134 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:42.837144 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:42.837155 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:42.837166 | orchestrator | 2026-04-04 00:34:42.837185 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-04 00:34:44.346648 | orchestrator | Saturday 04 April 2026 00:34:42 +0000 (0:00:00.817) 0:08:12.028 ******** 2026-04-04 00:34:44.346748 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:44.346765 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:34:44.346777 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:34:44.346788 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:34:44.346799 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:34:44.346810 | 
orchestrator | changed: [testbed-node-4] 2026-04-04 00:34:44.346821 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:34:44.346832 | orchestrator | 2026-04-04 00:34:44.346844 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:34:44.346856 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-04 00:34:44.346868 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:34:44.346909 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-04 00:34:44.346921 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-04 00:34:44.346932 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:34:44.346943 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:34:44.346962 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:34:44.346981 | orchestrator | 2026-04-04 00:34:44.346999 | orchestrator | 2026-04-04 00:34:44.347017 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:34:44.347037 | orchestrator | Saturday 04 April 2026 00:34:44 +0000 (0:00:01.217) 0:08:13.246 ******** 2026-04-04 00:34:44.347058 | orchestrator | =============================================================================== 2026-04-04 00:34:44.347080 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.08s 2026-04-04 00:34:44.347102 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.91s 2026-04-04 00:34:44.347122 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 34.78s 2026-04-04 00:34:44.347142 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.33s 2026-04-04 00:34:44.347163 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.57s 2026-04-04 00:34:44.347183 | orchestrator | osism.services.docker : Install containerd package --------------------- 11.15s 2026-04-04 00:34:44.347203 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.89s 2026-04-04 00:34:44.347224 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.70s 2026-04-04 00:34:44.347287 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.59s 2026-04-04 00:34:44.347310 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.13s 2026-04-04 00:34:44.347330 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.54s 2026-04-04 00:34:44.347369 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.33s 2026-04-04 00:34:44.347390 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.18s 2026-04-04 00:34:44.347410 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.83s 2026-04-04 00:34:44.347430 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.73s 2026-04-04 00:34:44.347449 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.60s 2026-04-04 00:34:44.347469 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.77s 2026-04-04 00:34:44.347489 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.08s 2026-04-04 00:34:44.347510 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 6.73s 2026-04-04 00:34:44.347530 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.22s 2026-04-04 00:34:44.517375 | orchestrator | + osism apply fail2ban 2026-04-04 00:34:56.154334 | orchestrator | 2026-04-04 00:34:56 | INFO  | Prepare task for execution of fail2ban. 2026-04-04 00:34:56.240367 | orchestrator | 2026-04-04 00:34:56 | INFO  | Task 4b619dd7-5f0d-43fc-8d72-69e473a914a4 (fail2ban) was prepared for execution. 2026-04-04 00:34:56.240446 | orchestrator | 2026-04-04 00:34:56 | INFO  | It takes a moment until task 4b619dd7-5f0d-43fc-8d72-69e473a914a4 (fail2ban) has been started and output is visible here. 2026-04-04 00:35:17.572855 | orchestrator | 2026-04-04 00:35:17.572971 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-04 00:35:17.572989 | orchestrator | 2026-04-04 00:35:17.573001 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-04 00:35:17.573013 | orchestrator | Saturday 04 April 2026 00:34:59 +0000 (0:00:00.318) 0:00:00.318 ******** 2026-04-04 00:35:17.573026 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:35:17.573039 | orchestrator | 2026-04-04 00:35:17.573051 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-04 00:35:17.573062 | orchestrator | Saturday 04 April 2026 00:35:00 +0000 (0:00:01.104) 0:00:01.422 ******** 2026-04-04 00:35:17.573073 | orchestrator | changed: [testbed-manager] 2026-04-04 00:35:17.573085 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:17.573096 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:17.573107 
| orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:17.573117 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:17.573128 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:17.573139 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:17.573150 | orchestrator | 2026-04-04 00:35:17.573161 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-04 00:35:17.573173 | orchestrator | Saturday 04 April 2026 00:35:12 +0000 (0:00:11.989) 0:00:13.412 ******** 2026-04-04 00:35:17.573184 | orchestrator | changed: [testbed-manager] 2026-04-04 00:35:17.573195 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:17.573206 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:17.573289 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:17.573302 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:17.573313 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:17.573324 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:17.573335 | orchestrator | 2026-04-04 00:35:17.573346 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-04-04 00:35:17.573358 | orchestrator | Saturday 04 April 2026 00:35:14 +0000 (0:00:01.629) 0:00:15.041 ******** 2026-04-04 00:35:17.573369 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:17.573381 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:17.573394 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:17.573407 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:17.573420 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:17.573433 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:17.573445 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:17.573457 | orchestrator | 2026-04-04 00:35:17.573470 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-04-04 00:35:17.573483 | orchestrator | Saturday 04 
April 2026 00:35:15 +0000 (0:00:01.253) 0:00:16.295 ******** 2026-04-04 00:35:17.573496 | orchestrator | changed: [testbed-manager] 2026-04-04 00:35:17.573509 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:17.573522 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:17.573535 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:17.573547 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:17.573560 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:17.573571 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:17.573582 | orchestrator | 2026-04-04 00:35:17.573593 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:35:17.573605 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:35:17.573617 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:35:17.573654 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:35:17.573666 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:35:17.573703 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:35:17.573722 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:35:17.573742 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:35:17.573761 | orchestrator | 2026-04-04 00:35:17.573780 | orchestrator | 2026-04-04 00:35:17.573799 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:35:17.573812 | orchestrator | Saturday 04 April 2026 00:35:17 +0000 (0:00:01.612) 0:00:17.907 ******** 2026-04-04 00:35:17.573823 | 
orchestrator | =============================================================================== 2026-04-04 00:35:17.573834 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.99s 2026-04-04 00:35:17.573845 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.63s 2026-04-04 00:35:17.573856 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.61s 2026-04-04 00:35:17.573867 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.25s 2026-04-04 00:35:17.573877 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.10s 2026-04-04 00:35:17.747505 | orchestrator | + osism apply network 2026-04-04 00:35:29.006424 | orchestrator | 2026-04-04 00:35:29 | INFO  | Prepare task for execution of network. 2026-04-04 00:35:29.088539 | orchestrator | 2026-04-04 00:35:29 | INFO  | Task 138e589d-b7dd-4f8b-afb4-2ee83805ada7 (network) was prepared for execution. 2026-04-04 00:35:29.088611 | orchestrator | 2026-04-04 00:35:29 | INFO  | It takes a moment until task 138e589d-b7dd-4f8b-afb4-2ee83805ada7 (network) has been started and output is visible here. 
2026-04-04 00:35:57.988984 | orchestrator | 2026-04-04 00:35:57.989103 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-04 00:35:57.989126 | orchestrator | 2026-04-04 00:35:57.989144 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-04 00:35:57.989160 | orchestrator | Saturday 04 April 2026 00:35:32 +0000 (0:00:00.336) 0:00:00.336 ******** 2026-04-04 00:35:57.989270 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.989288 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:57.989303 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:57.989317 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:57.989332 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:57.989347 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:57.989362 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:57.989378 | orchestrator | 2026-04-04 00:35:57.989392 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-04 00:35:57.989407 | orchestrator | Saturday 04 April 2026 00:35:32 +0000 (0:00:00.593) 0:00:00.929 ******** 2026-04-04 00:35:57.989423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:35:57.989440 | orchestrator | 2026-04-04 00:35:57.989456 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-04 00:35:57.989472 | orchestrator | Saturday 04 April 2026 00:35:34 +0000 (0:00:01.132) 0:00:02.062 ******** 2026-04-04 00:35:57.989488 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.989503 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:57.989548 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:57.989611 | 
orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:57.989626 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:57.989641 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:57.989657 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:57.989672 | orchestrator | 2026-04-04 00:35:57.989687 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-04 00:35:57.989702 | orchestrator | Saturday 04 April 2026 00:35:36 +0000 (0:00:02.561) 0:00:04.623 ******** 2026-04-04 00:35:57.989716 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.989731 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:57.989745 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:57.989760 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:57.989773 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:57.989787 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:57.989802 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:57.989817 | orchestrator | 2026-04-04 00:35:57.989832 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-04 00:35:57.989846 | orchestrator | Saturday 04 April 2026 00:35:38 +0000 (0:00:01.575) 0:00:06.199 ******** 2026-04-04 00:35:57.989861 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-04 00:35:57.989877 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-04 00:35:57.989891 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-04 00:35:57.989905 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-04 00:35:57.989919 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-04 00:35:57.989933 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-04 00:35:57.989985 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-04 00:35:57.990003 | orchestrator | 2026-04-04 00:35:57.990080 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-04-04 00:35:57.990104 | orchestrator | Saturday 04 April 2026 00:35:39 +0000 (0:00:01.122) 0:00:07.321 ******** 2026-04-04 00:35:57.990119 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:35:57.990225 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:57.990247 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:35:57.990262 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:57.990277 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:57.990292 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:57.990307 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:57.990322 | orchestrator | 2026-04-04 00:35:57.990337 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-04-04 00:35:57.990352 | orchestrator | Saturday 04 April 2026 00:35:39 +0000 (0:00:00.629) 0:00:07.951 ******** 2026-04-04 00:35:57.990366 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:35:57.990379 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:57.990390 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:35:57.990402 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:57.990416 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:57.990429 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:57.990443 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:57.990457 | orchestrator | 2026-04-04 00:35:57.990470 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-04-04 00:35:57.990484 | orchestrator | Saturday 04 April 2026 00:35:40 +0000 (0:00:00.754) 0:00:08.706 ******** 2026-04-04 00:35:57.990497 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:35:57.990511 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:57.990545 | orchestrator | skipping: [testbed-node-1] 
2026-04-04 00:35:57.990560 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:57.990573 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:57.990586 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:57.990600 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:57.990614 | orchestrator | 2026-04-04 00:35:57.990642 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-04 00:35:57.990656 | orchestrator | Saturday 04 April 2026 00:35:41 +0000 (0:00:00.761) 0:00:09.467 ******** 2026-04-04 00:35:57.990670 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:35:57.990683 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-04 00:35:57.990695 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:35:57.990708 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 00:35:57.990722 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-04 00:35:57.990737 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 00:35:57.990752 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 00:35:57.990766 | orchestrator | 2026-04-04 00:35:57.990807 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-04 00:35:57.990824 | orchestrator | Saturday 04 April 2026 00:35:44 +0000 (0:00:03.243) 0:00:12.711 ******** 2026-04-04 00:35:57.990838 | orchestrator | changed: [testbed-manager] 2026-04-04 00:35:57.990901 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:57.990918 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:57.990933 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:57.990947 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:57.990961 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:57.990976 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:57.990991 | orchestrator | 2026-04-04 00:35:57.991006 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-04 00:35:57.991020 | orchestrator | Saturday 04 April 2026 00:35:46 +0000 (0:00:01.625) 0:00:14.337 ******** 2026-04-04 00:35:57.991035 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:35:57.991049 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:35:57.991063 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 00:35:57.991078 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-04 00:35:57.991093 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 00:35:57.991107 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-04 00:35:57.991121 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 00:35:57.991136 | orchestrator | 2026-04-04 00:35:57.991150 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-04 00:35:57.991192 | orchestrator | Saturday 04 April 2026 00:35:48 +0000 (0:00:01.734) 0:00:16.072 ******** 2026-04-04 00:35:57.991208 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.991224 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:57.991238 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:57.991253 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:57.991268 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:57.991282 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:57.991296 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:57.991309 | orchestrator | 2026-04-04 00:35:57.991323 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-04 00:35:57.991337 | orchestrator | Saturday 04 April 2026 00:35:49 +0000 (0:00:01.084) 0:00:17.156 ******** 2026-04-04 00:35:57.991350 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:35:57.991363 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:57.991376 | orchestrator | skipping: [testbed-node-1] 2026-04-04 
00:35:57.991391 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:57.991405 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:57.991419 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:57.991434 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:57.991449 | orchestrator | 2026-04-04 00:35:57.991464 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-04 00:35:57.991530 | orchestrator | Saturday 04 April 2026 00:35:49 +0000 (0:00:00.611) 0:00:17.767 ******** 2026-04-04 00:35:57.991545 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.991558 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:57.991568 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:57.991576 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:57.991600 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:57.991614 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:57.991629 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:57.991644 | orchestrator | 2026-04-04 00:35:57.991658 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-04 00:35:57.991672 | orchestrator | Saturday 04 April 2026 00:35:52 +0000 (0:00:02.289) 0:00:20.057 ******** 2026-04-04 00:35:57.991686 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:57.991700 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:35:57.991714 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:57.991728 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:57.991743 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:57.991757 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:57.991772 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-04 00:35:57.991788 | orchestrator | 2026-04-04 00:35:57.991802 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-04 00:35:57.991826 | orchestrator | Saturday 04 April 2026 00:35:52 +0000 (0:00:00.863) 0:00:20.921 ******** 2026-04-04 00:35:57.991841 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.991856 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:57.991870 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:57.991884 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:57.991900 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:57.991913 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:57.991927 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:57.991941 | orchestrator | 2026-04-04 00:35:57.991956 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-04 00:35:57.991970 | orchestrator | Saturday 04 April 2026 00:35:54 +0000 (0:00:01.706) 0:00:22.627 ******** 2026-04-04 00:35:57.991985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:35:57.992002 | orchestrator | 2026-04-04 00:35:57.992015 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-04 00:35:57.992030 | orchestrator | Saturday 04 April 2026 00:35:55 +0000 (0:00:01.147) 0:00:23.775 ******** 2026-04-04 00:35:57.992043 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.992056 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:57.992069 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:57.992083 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:57.992098 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:57.992112 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:57.992126 | orchestrator | ok: [testbed-node-2] 2026-04-04 
00:35:57.992141 | orchestrator | 2026-04-04 00:35:57.992155 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-04 00:35:57.992233 | orchestrator | Saturday 04 April 2026 00:35:57 +0000 (0:00:01.698) 0:00:25.473 ******** 2026-04-04 00:35:57.992250 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:57.992264 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:57.992278 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:57.992293 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:57.992348 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:57.992379 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:36:13.509498 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:36:13.509619 | orchestrator | 2026-04-04 00:36:13.509637 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-04 00:36:13.509651 | orchestrator | Saturday 04 April 2026 00:35:58 +0000 (0:00:00.640) 0:00:26.113 ******** 2026-04-04 00:36:13.509663 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:36:13.509674 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:36:13.509685 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:36:13.509722 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:36:13.509734 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:36:13.509745 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:36:13.509755 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:36:13.509766 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:36:13.509777 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:36:13.509788 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:36:13.509798 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:36:13.509809 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:36:13.509819 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:36:13.509830 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:36:13.509841 | orchestrator | 2026-04-04 00:36:13.509851 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-04 00:36:13.509862 | orchestrator | Saturday 04 April 2026 00:35:59 +0000 (0:00:01.170) 0:00:27.284 ******** 2026-04-04 00:36:13.509873 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:36:13.509884 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:36:13.509895 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:36:13.509905 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:36:13.509916 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:36:13.509928 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:36:13.509947 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:36:13.509966 | orchestrator | 2026-04-04 00:36:13.509985 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-04 00:36:13.510004 | orchestrator | Saturday 04 April 2026 00:35:59 +0000 (0:00:00.571) 0:00:27.855 ******** 2026-04-04 00:36:13.510120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, testbed-manager, testbed-node-1, testbed-node-4, testbed-node-3, testbed-node-5 2026-04-04 00:36:13.510141 | orchestrator | 2026-04-04 
00:36:13.510185 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-04 00:36:13.510199 | orchestrator | Saturday 04 April 2026 00:36:04 +0000 (0:00:04.359) 0:00:32.215 ******** 2026-04-04 00:36:13.510213 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-04 00:36:13.510251 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-04 00:36:13.510265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 
'addresses': []}}) 2026-04-04 00:36:13.510351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-04 00:36:13.510386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-04 00:36:13.510397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-04 00:36:13.510409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-04 00:36:13.510420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-04 00:36:13.510431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-04 00:36:13.510441 | orchestrator | 2026-04-04 00:36:13.510452 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-04 00:36:13.510463 | orchestrator | Saturday 04 April 2026 00:36:09 +0000 (0:00:05.219) 0:00:37.434 ******** 2026-04-04 00:36:13.510474 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-04 00:36:13.510507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510519 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-04 00:36:13.510531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510548 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-04 00:36:13.510570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:13.510589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:25.727691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:36:25.727828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-04 00:36:25.727858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-04 00:36:25.727870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-04 00:36:25.727881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-04 00:36:25.727892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-04 00:36:25.727905 | orchestrator | 2026-04-04 00:36:25.727917 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-04 00:36:25.727930 | orchestrator | Saturday 04 April 2026 00:36:14 +0000 (0:00:05.225) 0:00:42.660 ******** 2026-04-04 00:36:25.727942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:36:25.727953 | orchestrator | 2026-04-04 00:36:25.727964 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-04 00:36:25.727975 | orchestrator | Saturday 04 April 2026 00:36:15 +0000 (0:00:01.227) 0:00:43.887 ******** 2026-04-04 00:36:25.727986 | orchestrator | ok: [testbed-manager] 2026-04-04 00:36:25.727998 | orchestrator | ok: [testbed-node-0] 2026-04-04 
00:36:25.728009 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:36:25.728048 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:36:25.728059 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:36:25.728070 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:36:25.728080 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:36:25.728091 | orchestrator | 2026-04-04 00:36:25.728119 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-04 00:36:25.728186 | orchestrator | Saturday 04 April 2026 00:36:16 +0000 (0:00:00.941) 0:00:44.828 ******** 2026-04-04 00:36:25.728206 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:36:25.728227 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:36:25.728247 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:36:25.728268 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:36:25.728282 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:36:25.728295 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:36:25.728307 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:36:25.728320 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:36:25.728333 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:36:25.728346 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:36:25.728358 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:36:25.728371 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:36:25.728383 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:36:25.728395 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:36:25.728408 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:36:25.728421 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:36:25.728433 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:36:25.728446 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:36:25.728478 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:36:25.728496 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:36:25.728515 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:36:25.728535 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:36:25.728554 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:36:25.728578 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:36:25.728599 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:36:25.728617 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:36:25.728635 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:36:25.728654 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:36:25.728674 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:36:25.728693 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:36:25.728704 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-04 00:36:25.728716 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-04 00:36:25.728726 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-04 00:36:25.728737 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-04 00:36:25.728762 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:36:25.728773 | orchestrator |
2026-04-04 00:36:25.728783 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-04 00:36:25.728794 | orchestrator | Saturday 04 April 2026 00:36:17 +0000 (0:00:00.896) 0:00:45.725 ********
2026-04-04 00:36:25.728806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:36:25.728817 | orchestrator |
2026-04-04 00:36:25.728829 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-04 00:36:25.728839 | orchestrator | Saturday 04 April 2026 00:36:18 +0000 (0:00:01.214) 0:00:46.939 ********
2026-04-04 00:36:25.728850 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:36:25.728861 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:36:25.728871 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:36:25.728882 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:36:25.728893 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:36:25.728904 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:36:25.728914 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:36:25.728925 | orchestrator |
2026-04-04 00:36:25.728936 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-04-04 00:36:25.728947 | orchestrator | Saturday 04 April 2026 00:36:19 +0000 (0:00:00.611) 0:00:47.551 ********
2026-04-04 00:36:25.728957 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:36:25.728968 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:36:25.728978 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:36:25.728989 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:36:25.728999 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:36:25.729010 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:36:25.729020 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:36:25.729039 | orchestrator |
2026-04-04 00:36:25.729050 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-04-04 00:36:25.729061 | orchestrator | Saturday 04 April 2026 00:36:20 +0000 (0:00:00.758) 0:00:48.309 ********
2026-04-04 00:36:25.729071 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:36:25.729082 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:36:25.729093 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:36:25.729103 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:36:25.729114 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:36:25.729152 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:36:25.729170 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:36:25.729189 | orchestrator |
2026-04-04 00:36:25.729205 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-04-04 00:36:25.729223 | orchestrator | Saturday 04 April 2026 00:36:20 +0000 (0:00:00.595) 0:00:48.905 ********
2026-04-04 00:36:25.729242 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:36:25.729261 | orchestrator | ok: [testbed-manager]
2026-04-04 00:36:25.729281 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:36:25.729299 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:36:25.729311 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:36:25.729321 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:36:25.729332 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:36:25.729343 | orchestrator |
2026-04-04 00:36:25.729354 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-04 00:36:25.729365 | orchestrator | Saturday 04 April 2026 00:36:22 +0000 (0:00:01.706) 0:00:50.611 ********
2026-04-04 00:36:25.729377 | orchestrator | ok: [testbed-manager]
2026-04-04 00:36:25.729395 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:36:25.729412 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:36:25.729428 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:36:25.729446 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:36:25.729463 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:36:25.729492 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:36:25.729509 | orchestrator |
2026-04-04 00:36:25.729527 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-04 00:36:25.729545 | orchestrator | Saturday 04 April 2026 00:36:23 +0000 (0:00:01.087) 0:00:51.699 ********
2026-04-04 00:36:25.729563 | orchestrator | ok: [testbed-manager]
2026-04-04 00:36:25.729581 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:36:25.729601 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:36:25.729620 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:36:25.729637 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:36:25.729648 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:36:25.729659 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:36:25.729670 | orchestrator |
2026-04-04 00:36:25.729693 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-04 00:36:27.297926 | orchestrator | Saturday 04 April 2026 00:36:25 +0000 (0:00:02.009) 0:00:53.708 ********
2026-04-04 00:36:27.298077 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:36:27.298095 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:36:27.298108 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:36:27.298120 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:36:27.298171 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:36:27.298184 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:36:27.298195 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:36:27.298206 | orchestrator |
2026-04-04 00:36:27.298224 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-04 00:36:27.298242 | orchestrator | Saturday 04 April 2026 00:36:26 +0000 (0:00:00.754) 0:00:54.463 ********
2026-04-04 00:36:27.298261 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:36:27.298278 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:36:27.298295 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:36:27.298315 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:36:27.298354 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:36:27.298378 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:36:27.298389 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:36:27.298400 | orchestrator |
2026-04-04 00:36:27.298412 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:36:27.298424 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-04 00:36:27.298437 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 00:36:27.298448 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 00:36:27.298459 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 00:36:27.298472 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 00:36:27.298485 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 00:36:27.298545 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 00:36:27.298564 | orchestrator |
2026-04-04 00:36:27.298584 | orchestrator |
2026-04-04 00:36:27.298603 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:36:27.298622 | orchestrator | Saturday 04 April 2026 00:36:26 +0000 (0:00:00.528) 0:00:54.991 ********
2026-04-04 00:36:27.298640 | orchestrator | ===============================================================================
2026-04-04 00:36:27.298658 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.23s
2026-04-04 00:36:27.298713 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.22s
2026-04-04 00:36:27.298736 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.36s
2026-04-04 00:36:27.298757 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.24s
2026-04-04 00:36:27.298775 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.56s
2026-04-04 00:36:27.298793 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.29s
2026-04-04 00:36:27.298810 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.01s
2026-04-04 00:36:27.298828 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.73s
2026-04-04 00:36:27.298845 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s
2026-04-04 00:36:27.298863 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.71s
2026-04-04 00:36:27.298881 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.70s
2026-04-04 00:36:27.298900 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2026-04-04 00:36:27.298918 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.58s
2026-04-04 00:36:27.298936 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.23s
2026-04-04 00:36:27.298954 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.21s
2026-04-04 00:36:27.298973 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s
2026-04-04 00:36:27.298990 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.15s
2026-04-04 00:36:27.299008 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.13s
2026-04-04 00:36:27.299026 | orchestrator | osism.commons.network : Create required directories --------------------- 1.12s
2026-04-04 00:36:27.299045 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.09s
2026-04-04 00:36:27.486229 | orchestrator | + osism apply wireguard
2026-04-04 00:36:38.747409 | orchestrator | 2026-04-04 00:36:38 | INFO  | Prepare task for execution of wireguard.
2026-04-04 00:36:38.820453 | orchestrator | 2026-04-04 00:36:38 | INFO  | Task 90bc09a8-8d82-47bb-a47c-302fe66272ed (wireguard) was prepared for execution.
2026-04-04 00:36:38.820545 | orchestrator | 2026-04-04 00:36:38 | INFO  | It takes a moment until task 90bc09a8-8d82-47bb-a47c-302fe66272ed (wireguard) has been started and output is visible here.
2026-04-04 00:36:57.299249 | orchestrator |
2026-04-04 00:36:57.299414 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-04 00:36:57.299442 | orchestrator |
2026-04-04 00:36:57.299461 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-04 00:36:57.299477 | orchestrator | Saturday 04 April 2026 00:36:42 +0000 (0:00:00.296) 0:00:00.296 ********
2026-04-04 00:36:57.299493 | orchestrator | ok: [testbed-manager]
2026-04-04 00:36:57.299512 | orchestrator |
2026-04-04 00:36:57.299529 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-04 00:36:57.299546 | orchestrator | Saturday 04 April 2026 00:36:43 +0000 (0:00:01.750) 0:00:02.047 ********
2026-04-04 00:36:57.299558 | orchestrator | changed: [testbed-manager]
2026-04-04 00:36:57.299580 | orchestrator |
2026-04-04 00:36:57.299591 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-04 00:36:57.299601 | orchestrator | Saturday 04 April 2026 00:36:49 +0000 (0:00:06.157) 0:00:08.205 ********
2026-04-04 00:36:57.299610 | orchestrator | changed: [testbed-manager]
2026-04-04 00:36:57.299620 | orchestrator |
2026-04-04 00:36:57.299630 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-04 00:36:57.299640 | orchestrator | Saturday 04 April 2026 00:36:50 +0000 (0:00:00.531) 0:00:08.737 ********
2026-04-04 00:36:57.299649 | orchestrator | changed: [testbed-manager]
2026-04-04 00:36:57.299687 | orchestrator |
2026-04-04 00:36:57.299698 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-04 00:36:57.299724 | orchestrator | Saturday 04 April 2026 00:36:50 +0000 (0:00:00.402) 0:00:09.139 ********
2026-04-04 00:36:57.299734 | orchestrator | ok: [testbed-manager]
2026-04-04 00:36:57.299744 | orchestrator |
2026-04-04 00:36:57.299754 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-04 00:36:57.299763 | orchestrator | Saturday 04 April 2026 00:36:51 +0000 (0:00:00.526) 0:00:09.665 ********
2026-04-04 00:36:57.299773 | orchestrator | ok: [testbed-manager]
2026-04-04 00:36:57.299783 | orchestrator |
2026-04-04 00:36:57.299795 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-04 00:36:57.299807 | orchestrator | Saturday 04 April 2026 00:36:51 +0000 (0:00:00.397) 0:00:10.063 ********
2026-04-04 00:36:57.299818 | orchestrator | ok: [testbed-manager]
2026-04-04 00:36:57.299830 | orchestrator |
2026-04-04 00:36:57.299841 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-04 00:36:57.299852 | orchestrator | Saturday 04 April 2026 00:36:52 +0000 (0:00:00.441) 0:00:10.504 ********
2026-04-04 00:36:57.299863 | orchestrator | changed: [testbed-manager]
2026-04-04 00:36:57.299874 | orchestrator |
2026-04-04 00:36:57.299886 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-04 00:36:57.299897 | orchestrator | Saturday 04 April 2026 00:36:53 +0000 (0:00:01.162) 0:00:11.666 ********
2026-04-04 00:36:57.299909 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-04 00:36:57.299920 | orchestrator | changed: [testbed-manager]
2026-04-04 00:36:57.299937 | orchestrator |
2026-04-04 00:36:57.299958 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-04 00:36:57.299978 | orchestrator | Saturday 04 April 2026 00:36:54 +0000 (0:00:00.899) 0:00:12.566 ********
2026-04-04 00:36:57.299995 | orchestrator | changed: [testbed-manager]
2026-04-04 00:36:57.300007 | orchestrator |
2026-04-04 00:36:57.300018 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-04 00:36:57.300035 | orchestrator | Saturday 04 April 2026 00:36:56 +0000 (0:00:01.969) 0:00:14.535 ********
2026-04-04 00:36:57.300046 | orchestrator | changed: [testbed-manager]
2026-04-04 00:36:57.300058 | orchestrator |
2026-04-04 00:36:57.300069 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:36:57.300103 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:36:57.300117 | orchestrator |
2026-04-04 00:36:57.300128 | orchestrator |
2026-04-04 00:36:57.300140 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:36:57.300150 | orchestrator | Saturday 04 April 2026 00:36:57 +0000 (0:00:00.832) 0:00:15.368 ********
2026-04-04 00:36:57.300162 | orchestrator | ===============================================================================
2026-04-04 00:36:57.300172 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.16s
2026-04-04 00:36:57.300181 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.97s
2026-04-04 00:36:57.300191 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.75s
2026-04-04 00:36:57.300200 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s
2026-04-04 00:36:57.300210 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s
2026-04-04 00:36:57.300219 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.83s
2026-04-04 00:36:57.300229 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s
2026-04-04 00:36:57.300238 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2026-04-04 00:36:57.300248 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2026-04-04 00:36:57.300257 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s
2026-04-04 00:36:57.300275 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-04-04 00:36:57.467556 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-04 00:36:57.504788 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-04 00:36:57.504918 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-04 00:36:57.576421 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 209 0 --:--:-- --:--:-- --:--:-- 211
2026-04-04 00:36:57.585473 | orchestrator | + osism apply --environment custom workarounds
2026-04-04 00:36:58.807249 | orchestrator | 2026-04-04 00:36:58 | INFO  | Trying to run play workarounds in environment custom
2026-04-04 00:37:08.833786 | orchestrator | 2026-04-04 00:37:08 | INFO  | Prepare task for execution of workarounds.
2026-04-04 00:37:08.904132 | orchestrator | 2026-04-04 00:37:08 | INFO  | Task 2dcf2872-46d2-4442-bd6e-a0f4eedbff18 (workarounds) was prepared for execution.
2026-04-04 00:37:08.904233 | orchestrator | 2026-04-04 00:37:08 | INFO  | It takes a moment until task 2dcf2872-46d2-4442-bd6e-a0f4eedbff18 (workarounds) has been started and output is visible here.
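The `osism apply wireguard` play above installs WireGuard on the manager, generates the server key pair and a preshared key, renders `/etc/wireguard/wg0.conf` plus client configuration files, and starts `wg-quick@wg0.service`. As a rough illustration of what such a rendered wg-quick server configuration typically contains (interface addresses, port, and key placeholders below are illustrative assumptions, not values from this run):

```ini
; Hypothetical sketch of a wg-quick server config in the shape produced by
; the "Copy wg0.conf configuration file" task above; all values are placeholders.
[Interface]
Address = 192.168.48.1/24           ; placeholder tunnel address
ListenPort = 51820                  ; WireGuard default port, assumed here
PrivateKey = <server-private-key>   ; from "Create public and private key - server"

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>      ; from "Create preshared key"
AllowedIPs = 192.168.48.2/32        ; placeholder client tunnel address
```

The "Restart wg0 service" handler at the end of the play corresponds to restarting `wg-quick@wg0` via systemd, which re-reads this file.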
2026-04-04 00:37:32.120829 | orchestrator |
2026-04-04 00:37:32.120942 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:37:32.120958 | orchestrator |
2026-04-04 00:37:32.120969 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-04 00:37:32.120981 | orchestrator | Saturday 04 April 2026 00:37:11 +0000 (0:00:00.177) 0:00:00.177 ********
2026-04-04 00:37:32.120991 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-04 00:37:32.121002 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-04 00:37:32.121011 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-04 00:37:32.121021 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-04 00:37:32.121031 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-04 00:37:32.121067 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-04 00:37:32.121077 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-04 00:37:32.121087 | orchestrator |
2026-04-04 00:37:32.121097 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-04 00:37:32.121107 | orchestrator |
2026-04-04 00:37:32.121116 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-04 00:37:32.121126 | orchestrator | Saturday 04 April 2026 00:37:12 +0000 (0:00:00.612) 0:00:00.789 ********
2026-04-04 00:37:32.121136 | orchestrator | ok: [testbed-manager]
2026-04-04 00:37:32.121147 | orchestrator |
2026-04-04 00:37:32.121157 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-04 00:37:32.121167 | orchestrator |
2026-04-04 00:37:32.121176 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-04 00:37:32.121186 | orchestrator | Saturday 04 April 2026 00:37:14 +0000 (0:00:02.237) 0:00:03.027 ********
2026-04-04 00:37:32.121196 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:37:32.121206 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:37:32.121216 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:37:32.121225 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:37:32.121235 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:37:32.121245 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:37:32.121254 | orchestrator |
2026-04-04 00:37:32.121264 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-04 00:37:32.121274 | orchestrator |
2026-04-04 00:37:32.121284 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-04 00:37:32.121302 | orchestrator | Saturday 04 April 2026 00:37:16 +0000 (0:00:02.171) 0:00:05.198 ********
2026-04-04 00:37:32.121312 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-04 00:37:32.121342 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-04 00:37:32.121355 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-04 00:37:32.121367 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-04 00:37:32.121378 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-04 00:37:32.121389 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-04 00:37:32.121400 | orchestrator |
2026-04-04 00:37:32.121411 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-04 00:37:32.121422 | orchestrator | Saturday 04 April 2026 00:37:18 +0000 (0:00:01.341) 0:00:06.539 ********
2026-04-04 00:37:32.121434 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:37:32.121445 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:37:32.121456 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:37:32.121467 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:37:32.121478 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:37:32.121489 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:37:32.121499 | orchestrator |
2026-04-04 00:37:32.121510 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-04 00:37:32.121522 | orchestrator | Saturday 04 April 2026 00:37:21 +0000 (0:00:03.645) 0:00:10.185 ********
2026-04-04 00:37:32.121533 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:37:32.121544 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:37:32.121554 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:37:32.121565 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:37:32.121576 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:37:32.121587 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:37:32.121598 | orchestrator |
2026-04-04 00:37:32.121609 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-04 00:37:32.121620 | orchestrator |
2026-04-04 00:37:32.121631 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-04 00:37:32.121642 | orchestrator | Saturday 04 April 2026 00:37:22 +0000 (0:00:00.452) 0:00:10.637 ********
2026-04-04 00:37:32.121653 | orchestrator | changed: [testbed-manager]
2026-04-04 00:37:32.121664 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:37:32.121675 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:37:32.121687 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:37:32.121698 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:37:32.121709 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:37:32.121720 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:37:32.121729 | orchestrator |
2026-04-04 00:37:32.121739 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-04 00:37:32.121749 | orchestrator | Saturday 04 April 2026 00:37:24 +0000 (0:00:01.588) 0:00:12.225 ********
2026-04-04 00:37:32.121758 | orchestrator | changed: [testbed-manager]
2026-04-04 00:37:32.121768 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:37:32.121777 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:37:32.121787 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:37:32.121797 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:37:32.121806 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:37:32.121832 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:37:32.121842 | orchestrator |
2026-04-04 00:37:32.121852 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-04 00:37:32.121862 | orchestrator | Saturday 04 April 2026 00:37:25 +0000 (0:00:01.401) 0:00:13.626 ********
2026-04-04 00:37:32.121872 | orchestrator | ok: [testbed-manager]
2026-04-04 00:37:32.121881 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:37:32.121891 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:37:32.121908 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:37:32.121918 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:37:32.121927 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:37:32.121937 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:37:32.121946 | orchestrator |
2026-04-04 00:37:32.121956 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-04 00:37:32.121966 | orchestrator | Saturday 04 April 2026 00:37:27 +0000 (0:00:01.633) 0:00:15.260 ********
2026-04-04 00:37:32.121976 | orchestrator | changed: [testbed-manager]
2026-04-04 00:37:32.121985 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:37:32.121995 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:37:32.122005 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:37:32.122120 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:37:32.122135 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:37:32.122145 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:37:32.122155 | orchestrator |
2026-04-04 00:37:32.122164 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-04 00:37:32.122174 | orchestrator | Saturday 04 April 2026 00:37:28 +0000 (0:00:01.490) 0:00:16.750 ********
2026-04-04 00:37:32.122184 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:37:32.122193 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:37:32.122203 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:37:32.122213 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:37:32.122222 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:37:32.122232 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:37:32.122241 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:37:32.122251 | orchestrator |
2026-04-04 00:37:32.122260 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-04 00:37:32.122270 | orchestrator |
2026-04-04 00:37:32.122280 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-04 00:37:32.122289 | orchestrator | Saturday 04 April 2026 00:37:29 +0000 (0:00:00.630) 0:00:17.381 ********
2026-04-04 00:37:32.122299 | orchestrator | ok: [testbed-manager]
2026-04-04 00:37:32.122309 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:37:32.122318 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:37:32.122328 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:37:32.122337 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:37:32.122353 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:37:32.122362 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:37:32.122372 | orchestrator |
2026-04-04 00:37:32.122382 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:37:32.122393 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-04 00:37:32.122403 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:32.122413 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:32.122423 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:32.122433 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:32.122442 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:32.122452 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:32.122470 | orchestrator |
2026-04-04 00:37:32.122487 | orchestrator |
2026-04-04 00:37:32.122503 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:37:32.122532 | orchestrator | Saturday 04 April 2026 00:37:32 +0000 (0:00:02.927) 0:00:20.308 ********
2026-04-04 00:37:32.122549 | orchestrator | ===============================================================================
2026-04-04 00:37:32.122568 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.65s
2026-04-04 00:37:32.122585 | orchestrator | Install python3-docker -------------------------------------------------- 2.93s
2026-04-04 00:37:32.122603 | orchestrator | Apply netplan configuration --------------------------------------------- 2.24s
2026-04-04 00:37:32.122616 | orchestrator | Apply netplan configuration --------------------------------------------- 2.17s
2026-04-04 00:37:32.122626 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s
2026-04-04 00:37:32.122636 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.59s
2026-04-04 00:37:32.122645 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.49s
2026-04-04 00:37:32.122655 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.40s
2026-04-04 00:37:32.122664 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.34s
2026-04-04 00:37:32.122674 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2026-04-04 00:37:32.122684 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.61s
2026-04-04 00:37:32.122702 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.45s
2026-04-04 00:37:32.405355 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-04 00:37:43.649508 | orchestrator | 2026-04-04 00:37:43 | INFO  | Prepare task for execution of reboot.
2026-04-04 00:37:43.725878 | orchestrator | 2026-04-04 00:37:43 | INFO  | Task 6d442a42-237b-49b7-968e-10948858a034 (reboot) was prepared for execution.
2026-04-04 00:37:43.725965 | orchestrator | 2026-04-04 00:37:43 | INFO  | It takes a moment until task 6d442a42-237b-49b7-968e-10948858a034 (reboot) has been started and output is visible here.
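The `osism apply reboot -l testbed-nodes -e ireallymeanit=yes` invocation above passes an extra variable as a confirmation gate: in the reboot plays that follow, the "Exit playbook, if user did not mean to reboot systems" task is skipped because the variable is set. A minimal sketch of such a guard task in Ansible, assuming the variable name `ireallymeanit` from the command line (task wording modeled on the log; the actual playbook internals are not shown in this output):

```yaml
# Hypothetical confirmation gate; fails the play unless -e ireallymeanit=yes
# was passed, mirroring the skipped task seen in the log above.
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "Pass -e ireallymeanit=yes to confirm the reboot."
  when: ireallymeanit | default('no') != 'yes'
```

With the variable set to `yes`, the `when:` condition is false and the task is reported as `skipping:`, which is exactly what the per-node play output below shows.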
2026-04-04 00:37:54.402842 | orchestrator |
2026-04-04 00:37:54.402970 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-04 00:37:54.402988 | orchestrator |
2026-04-04 00:37:54.402999 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-04 00:37:54.403046 | orchestrator | Saturday 04 April 2026 00:37:46 +0000 (0:00:00.220) 0:00:00.220 ********
2026-04-04 00:37:54.403057 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:37:54.403068 | orchestrator |
2026-04-04 00:37:54.403078 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-04 00:37:54.403088 | orchestrator | Saturday 04 April 2026 00:37:46 +0000 (0:00:00.122) 0:00:00.343 ********
2026-04-04 00:37:54.403098 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:37:54.403108 | orchestrator |
2026-04-04 00:37:54.403118 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-04 00:37:54.403128 | orchestrator | Saturday 04 April 2026 00:37:48 +0000 (0:00:01.193) 0:00:01.536 ********
2026-04-04 00:37:54.403137 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:37:54.403147 | orchestrator |
2026-04-04 00:37:54.403157 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-04 00:37:54.403167 | orchestrator |
2026-04-04 00:37:54.403176 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-04 00:37:54.403186 | orchestrator | Saturday 04 April 2026 00:37:48 +0000 (0:00:00.096) 0:00:01.633 ********
2026-04-04 00:37:54.403196 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:37:54.403205 | orchestrator |
2026-04-04 00:37:54.403215 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-04 00:37:54.403225 | orchestrator | Saturday 04 April 2026 00:37:48 +0000 (0:00:00.088) 0:00:01.722 ********
2026-04-04 00:37:54.403250 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:37:54.403260 | orchestrator |
2026-04-04 00:37:54.403270 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-04 00:37:54.403301 | orchestrator | Saturday 04 April 2026 00:37:49 +0000 (0:00:00.980) 0:00:02.702 ********
2026-04-04 00:37:54.403312 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:37:54.403322 | orchestrator |
2026-04-04 00:37:54.403332 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-04 00:37:54.403341 | orchestrator |
2026-04-04 00:37:54.403351 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-04 00:37:54.403360 | orchestrator | Saturday 04 April 2026 00:37:49 +0000 (0:00:00.093) 0:00:02.796 ********
2026-04-04 00:37:54.403370 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:37:54.403379 | orchestrator |
2026-04-04 00:37:54.403390 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-04 00:37:54.403401 | orchestrator | Saturday 04 April 2026 00:37:49 +0000 (0:00:00.086) 0:00:02.882 ********
2026-04-04 00:37:54.403412 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:37:54.403424 | orchestrator |
2026-04-04 00:37:54.403435 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-04 00:37:54.403447 | orchestrator | Saturday 04 April 2026 00:37:50 +0000 (0:00:01.024) 0:00:03.907 ********
2026-04-04 00:37:54.403458 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:37:54.403470 | orchestrator |
2026-04-04 00:37:54.403481 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-04 00:37:54.403581 | orchestrator |
2026-04-04 00:37:54.403591 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-04 00:37:54.403601 | orchestrator | Saturday 04 April 2026 00:37:50 +0000 (0:00:00.097) 0:00:04.004 ********
2026-04-04 00:37:54.403611 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:37:54.403620 | orchestrator |
2026-04-04 00:37:54.403630 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-04 00:37:54.403640 | orchestrator | Saturday 04 April 2026 00:37:50 +0000 (0:00:00.084) 0:00:04.089 ********
2026-04-04 00:37:54.403650 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:37:54.403659 | orchestrator |
2026-04-04 00:37:54.403669 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-04 00:37:54.403679 | orchestrator | Saturday 04 April 2026 00:37:51 +0000 (0:00:00.996) 0:00:05.086 ********
2026-04-04 00:37:54.403688 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:37:54.403698 | orchestrator |
2026-04-04 00:37:54.403708 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-04 00:37:54.403717 | orchestrator |
2026-04-04 00:37:54.403727 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-04 00:37:54.403737 | orchestrator | Saturday 04 April 2026 00:37:51 +0000 (0:00:00.108) 0:00:05.194 ********
2026-04-04 00:37:54.403747 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:37:54.403756 | orchestrator |
2026-04-04 00:37:54.403766 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-04 00:37:54.403776 | orchestrator | Saturday 04 April 2026 00:37:51 +0000 (0:00:01.067) 0:00:05.282 ********
2026-04-04 00:37:54.403786 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:37:54.403795 | orchestrator |
2026-04-04 00:37:54.403805 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-04 00:37:54.403815 | orchestrator | Saturday 04 April 2026 00:37:52 +0000 (0:00:01.067) 0:00:06.350 ********
2026-04-04 00:37:54.403824 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:37:54.403834 | orchestrator |
2026-04-04 00:37:54.403844 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-04 00:37:54.403854 | orchestrator |
2026-04-04 00:37:54.403863 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-04 00:37:54.403873 | orchestrator | Saturday 04 April 2026 00:37:53 +0000 (0:00:00.111) 0:00:06.462 ********
2026-04-04 00:37:54.403883 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:37:54.403892 | orchestrator |
2026-04-04 00:37:54.403902 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-04 00:37:54.403922 | orchestrator | Saturday 04 April 2026 00:37:53 +0000 (0:00:00.087) 0:00:06.550 ********
2026-04-04 00:37:54.403932 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:37:54.403942 | orchestrator |
2026-04-04 00:37:54.403952 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-04 00:37:54.403962 | orchestrator | Saturday 04 April 2026 00:37:54 +0000 (0:00:00.972) 0:00:07.522 ********
2026-04-04 00:37:54.403997 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:37:54.404037 | orchestrator |
2026-04-04 00:37:54.404052 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:37:54.404069 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:54.404085 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:37:54.404100 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2
rescued=0 ignored=0 2026-04-04 00:37:54.404116 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:37:54.404132 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:37:54.404148 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:37:54.404164 | orchestrator | 2026-04-04 00:37:54.404180 | orchestrator | 2026-04-04 00:37:54.404195 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:37:54.404221 | orchestrator | Saturday 04 April 2026 00:37:54 +0000 (0:00:00.034) 0:00:07.557 ******** 2026-04-04 00:37:54.404237 | orchestrator | =============================================================================== 2026-04-04 00:37:54.404254 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.23s 2026-04-04 00:37:54.404270 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.56s 2026-04-04 00:37:54.404286 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-04-04 00:37:54.569629 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-04 00:38:05.848928 | orchestrator | 2026-04-04 00:38:05 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-04 00:38:05.925946 | orchestrator | 2026-04-04 00:38:05 | INFO  | Task 6c7bffe9-fe90-4c02-85eb-fa6cdd8324e6 (wait-for-connection) was prepared for execution. 2026-04-04 00:38:05.926086 | orchestrator | 2026-04-04 00:38:05 | INFO  | It takes a moment until task 6c7bffe9-fe90-4c02-85eb-fa6cdd8324e6 (wait-for-connection) has been started and output is visible here. 
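The sequence above reboots every node without blocking, then runs a separate `wait-for-connection` play that polls until SSH is reachable again. A minimal generic sketch of that per-host wait (an assumption for illustration, not OSISM code — the real play uses Ansible's `wait_for_connection` module):

```shell
#!/usr/bin/env bash
# Retry a cheap probe command until it succeeds or the retry budget runs out.
# Usage: wait_for_connection <retries> <interval-seconds> <probe-cmd> [args...]
# e.g.:  wait_for_connection 60 5 ssh -o ConnectTimeout=5 testbed-node-0 true
wait_for_connection() {
    local retries=$1; shift     # how many probes before giving up
    local interval=$1; shift    # seconds to sleep between probes
    local i
    for (( i = 0; i < retries; i++ )); do
        if "$@"; then           # probe succeeded: host is back
            return 0
        fi
        sleep "$interval"
    done
    return 1                    # budget exhausted: host never came back
}
```

Splitting reboot and wait into two plays keeps the reboot task fast and lets the wait run against all nodes in parallel.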
2026-04-04 00:38:20.831566 | orchestrator | 2026-04-04 00:38:20.831659 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-04 00:38:20.831673 | orchestrator | 2026-04-04 00:38:20.831683 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-04 00:38:20.831692 | orchestrator | Saturday 04 April 2026 00:38:09 +0000 (0:00:00.298) 0:00:00.298 ******** 2026-04-04 00:38:20.831702 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:38:20.831712 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:38:20.831720 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:38:20.831730 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:38:20.831738 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:38:20.831747 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:38:20.831756 | orchestrator | 2026-04-04 00:38:20.831765 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:38:20.831774 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:38:20.831821 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:38:20.831831 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:38:20.831840 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:38:20.831848 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:38:20.831857 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:38:20.831866 | orchestrator | 2026-04-04 00:38:20.831874 | orchestrator | 2026-04-04 00:38:20.831883 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-04 00:38:20.831892 | orchestrator | Saturday 04 April 2026 00:38:20 +0000 (0:00:11.582) 0:00:11.880 ******** 2026-04-04 00:38:20.831900 | orchestrator | =============================================================================== 2026-04-04 00:38:20.831909 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2026-04-04 00:38:20.956043 | orchestrator | + osism apply hddtemp 2026-04-04 00:38:32.089270 | orchestrator | 2026-04-04 00:38:32 | INFO  | Prepare task for execution of hddtemp. 2026-04-04 00:38:32.154522 | orchestrator | 2026-04-04 00:38:32 | INFO  | Task 5d39e6d1-0aba-4623-a9a4-7439d1a1a11e (hddtemp) was prepared for execution. 2026-04-04 00:38:32.154607 | orchestrator | 2026-04-04 00:38:32 | INFO  | It takes a moment until task 5d39e6d1-0aba-4623-a9a4-7439d1a1a11e (hddtemp) has been started and output is visible here. 2026-04-04 00:38:57.970248 | orchestrator | 2026-04-04 00:38:57.970375 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-04 00:38:57.970392 | orchestrator | 2026-04-04 00:38:57.970403 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-04 00:38:57.971400 | orchestrator | Saturday 04 April 2026 00:38:35 +0000 (0:00:00.262) 0:00:00.262 ******** 2026-04-04 00:38:57.971480 | orchestrator | ok: [testbed-manager] 2026-04-04 00:38:57.971495 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:38:57.971505 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:38:57.971515 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:38:57.971525 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:38:57.971535 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:38:57.971545 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:38:57.971555 | orchestrator | 2026-04-04 00:38:57.971566 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-04 00:38:57.971576 | orchestrator | Saturday 04 April 2026 00:38:35 +0000 (0:00:00.456) 0:00:00.719 ******** 2026-04-04 00:38:57.971588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:38:57.971600 | orchestrator | 2026-04-04 00:38:57.971611 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-04 00:38:57.971620 | orchestrator | Saturday 04 April 2026 00:38:36 +0000 (0:00:00.866) 0:00:01.585 ******** 2026-04-04 00:38:57.971648 | orchestrator | ok: [testbed-manager] 2026-04-04 00:38:57.971658 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:38:57.971668 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:38:57.971678 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:38:57.971687 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:38:57.971697 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:38:57.971707 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:38:57.971716 | orchestrator | 2026-04-04 00:38:57.971727 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-04 00:38:57.971761 | orchestrator | Saturday 04 April 2026 00:38:38 +0000 (0:00:02.476) 0:00:04.061 ******** 2026-04-04 00:38:57.971771 | orchestrator | changed: [testbed-manager] 2026-04-04 00:38:57.971782 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:38:57.971792 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:38:57.971802 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:38:57.971811 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:38:57.971821 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:38:57.971830 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:38:57.971857 | 
orchestrator | 2026-04-04 00:38:57.971877 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-04 00:38:57.971887 | orchestrator | Saturday 04 April 2026 00:38:39 +0000 (0:00:00.866) 0:00:04.928 ******** 2026-04-04 00:38:57.971896 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:38:57.971906 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:38:57.971916 | orchestrator | ok: [testbed-manager] 2026-04-04 00:38:57.971926 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:38:57.971985 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:38:57.971995 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:38:57.972005 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:38:57.972015 | orchestrator | 2026-04-04 00:38:57.972024 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-04 00:38:57.972034 | orchestrator | Saturday 04 April 2026 00:38:40 +0000 (0:00:01.245) 0:00:06.174 ******** 2026-04-04 00:38:57.972044 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:38:57.972054 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:38:57.972063 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:38:57.972073 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:38:57.972083 | orchestrator | changed: [testbed-manager] 2026-04-04 00:38:57.972093 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:38:57.972102 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:38:57.972112 | orchestrator | 2026-04-04 00:38:57.972122 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-04 00:38:57.972132 | orchestrator | Saturday 04 April 2026 00:38:41 +0000 (0:00:00.549) 0:00:06.724 ******** 2026-04-04 00:38:57.972142 | orchestrator | changed: [testbed-manager] 2026-04-04 00:38:57.972151 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:38:57.972161 | orchestrator | changed: [testbed-node-1] 
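The hddtemp role's enable/check/load steps follow a common pattern: persist the `drivetemp` module so it loads on every boot, then load it immediately only if it is not already in the kernel. A rough sketch of that pattern (an assumption for illustration — not the `osism.services.hddtemp` implementation, which does this via Ansible modules):

```shell
#!/usr/bin/env bash
# Return 0 if the named module is currently loaded (first field of /proc/modules).
module_loaded() {
    grep -qw "^$1" /proc/modules
}

# Persist a module via modules-load.d (read by systemd at boot), then load it
# now if needed. Requires root for the tee and modprobe steps.
ensure_module() {
    local mod=$1
    echo "$mod" | sudo tee "/etc/modules-load.d/$mod.conf" >/dev/null
    if ! module_loaded "$mod"; then
        sudo modprobe "$mod"
    fi
}
```

The "skipping" results in the trace show the conditional load: only hosts where the module was not yet active (here, testbed-manager) report `changed`.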
2026-04-04 00:38:57.972170 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:38:57.972180 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:38:57.972190 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:38:57.972200 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:38:57.972209 | orchestrator | 2026-04-04 00:38:57.972219 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-04 00:38:57.972229 | orchestrator | Saturday 04 April 2026 00:38:55 +0000 (0:00:13.507) 0:00:20.232 ******** 2026-04-04 00:38:57.972239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:38:57.972249 | orchestrator | 2026-04-04 00:38:57.972259 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-04 00:38:57.972269 | orchestrator | Saturday 04 April 2026 00:38:56 +0000 (0:00:01.021) 0:00:21.254 ******** 2026-04-04 00:38:57.972279 | orchestrator | changed: [testbed-manager] 2026-04-04 00:38:57.972288 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:38:57.972298 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:38:57.972308 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:38:57.972317 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:38:57.972327 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:38:57.972336 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:38:57.972346 | orchestrator | 2026-04-04 00:38:57.972356 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:38:57.972375 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:38:57.972412 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:57.972423 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:57.972433 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:57.972443 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:57.972453 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:57.972463 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:57.972472 | orchestrator | 2026-04-04 00:38:57.972482 | orchestrator | 2026-04-04 00:38:57.972492 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:38:57.972502 | orchestrator | Saturday 04 April 2026 00:38:57 +0000 (0:00:01.690) 0:00:22.944 ******** 2026-04-04 00:38:57.972518 | orchestrator | =============================================================================== 2026-04-04 00:38:57.972528 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.51s 2026-04-04 00:38:57.972537 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.48s 2026-04-04 00:38:57.972547 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.69s 2026-04-04 00:38:57.972557 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s 2026-04-04 00:38:57.972567 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.02s 2026-04-04 00:38:57.972577 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.87s 2026-04-04 00:38:57.972586 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.87s 2026-04-04 00:38:57.972596 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.55s 2026-04-04 00:38:57.972606 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.46s 2026-04-04 00:38:58.087152 | orchestrator | ++ semver latest 7.1.1 2026-04-04 00:38:58.129119 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:38:58.129216 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 00:38:58.129233 | orchestrator | + sudo systemctl restart manager.service 2026-04-04 00:39:11.264314 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-04 00:39:11.264407 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-04 00:39:11.264421 | orchestrator | + local max_attempts=60 2026-04-04 00:39:11.264431 | orchestrator | + local name=ceph-ansible 2026-04-04 00:39:11.264439 | orchestrator | + local attempt_num=1 2026-04-04 00:39:11.264449 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:11.299852 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:11.299961 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:11.299977 | orchestrator | + sleep 5 2026-04-04 00:39:16.306107 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:16.350997 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:16.351115 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:16.351142 | orchestrator | + sleep 5 2026-04-04 00:39:21.353439 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:21.382362 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:21.382476 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:21.382492 | orchestrator | + sleep 5 2026-04-04 00:39:26.385719 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:26.428146 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:26.428267 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:26.428294 | orchestrator | + sleep 5 2026-04-04 00:39:31.432316 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:31.464788 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:31.464871 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:31.464885 | orchestrator | + sleep 5 2026-04-04 00:39:36.469266 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:36.500029 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:36.500122 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:36.500135 | orchestrator | + sleep 5 2026-04-04 00:39:41.504093 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:41.544281 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:41.544366 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:41.544379 | orchestrator | + sleep 5 2026-04-04 00:39:46.549088 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:46.572841 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:46.572976 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:46.572993 | orchestrator | + sleep 5 2026-04-04 00:39:51.577423 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:51.608228 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:51.608317 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:51.608334 | orchestrator | + sleep 5 2026-04-04 00:39:56.612102 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:39:56.643218 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:39:56.643324 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:39:56.643341 | orchestrator | + sleep 5 2026-04-04 00:40:01.646424 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:40:01.683784 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:40:01.683942 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:40:01.683957 | orchestrator | + sleep 5 2026-04-04 00:40:06.688174 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:40:06.721019 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:40:06.721130 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:40:06.721147 | orchestrator | + sleep 5 2026-04-04 00:40:11.725247 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:40:11.766595 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:40:11.766711 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:40:11.766727 | orchestrator | + sleep 5 2026-04-04 00:40:16.771746 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:40:16.803809 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:40:16.803975 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-04 00:40:16.803987 | orchestrator | + local max_attempts=60 2026-04-04 00:40:16.803994 | orchestrator | + local name=kolla-ansible 2026-04-04 00:40:16.804008 | orchestrator | + local attempt_num=1 2026-04-04 00:40:16.804584 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-04 00:40:16.832094 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:40:16.832166 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-04 00:40:16.832175 | orchestrator | + local max_attempts=60 2026-04-04 00:40:16.832182 | orchestrator | + local name=osism-ansible 2026-04-04 00:40:16.832189 | orchestrator | + local attempt_num=1 2026-04-04 00:40:16.832660 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-04 00:40:16.867806 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:40:16.867949 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-04 00:40:16.867960 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-04 00:40:17.014654 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-04 00:40:17.136260 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-04 00:40:17.273705 | orchestrator | ARA in osism-ansible already disabled. 2026-04-04 00:40:17.410942 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-04 00:40:17.411053 | orchestrator | + osism apply gather-facts 2026-04-04 00:40:28.583195 | orchestrator | 2026-04-04 00:40:28 | INFO  | Prepare task for execution of gather-facts. 2026-04-04 00:40:28.654403 | orchestrator | 2026-04-04 00:40:28 | INFO  | Task 38d244d4-dfdd-462c-851b-f1594492ad88 (gather-facts) was prepared for execution. 2026-04-04 00:40:28.654496 | orchestrator | 2026-04-04 00:40:28 | INFO  | It takes a moment until task 38d244d4-dfdd-462c-851b-f1594492ad88 (gather-facts) has been started and output is visible here. 
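The `wait_for_container_healthy` helper is traced in full above (`docker inspect` of `.State.Health.Status`, compare against `healthy`, give up after `max_attempts`, sleep 5 between polls). A reconstruction of that loop — `DOCKER_CMD` and `POLL_INTERVAL` are variables added here so the sketch can be exercised without a Docker daemon; the traced script hardcodes `/usr/bin/docker` and `sleep 5`:

```shell
#!/usr/bin/env bash
DOCKER_CMD="${DOCKER_CMD:-/usr/bin/docker}"
POLL_INTERVAL="${POLL_INTERVAL:-5}"

# Poll a container's health status until it reports "healthy".
# Usage: wait_for_container_healthy <max_attempts> <container-name>
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status

    while true; do
        # Health states reported by Docker: starting, healthy, unhealthy.
        status=$($DOCKER_CMD inspect -f '{{.State.Health.Status}}' "$name")
        if [[ $status == healthy ]]; then
            return 0
        fi
        # Post-increment: fail once max_attempts polls have been made.
        if (( attempt_num++ == max_attempts )); then
            return 1
        fi
        sleep "$POLL_INTERVAL"
    done
}
```

Note how the trace walks through `unhealthy` → `starting` → `healthy` for ceph-ansible after the manager.service restart: the loop tolerates both pre-healthy states identically.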
2026-04-04 00:40:37.456085 | orchestrator | 2026-04-04 00:40:37.456172 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-04 00:40:37.456181 | orchestrator | 2026-04-04 00:40:37.456185 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-04 00:40:37.456190 | orchestrator | Saturday 04 April 2026 00:40:31 +0000 (0:00:00.209) 0:00:00.209 ******** 2026-04-04 00:40:37.456194 | orchestrator | ok: [testbed-manager] 2026-04-04 00:40:37.456199 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:40:37.456203 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:40:37.456208 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:40:37.456211 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:40:37.456215 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:40:37.456219 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:40:37.456223 | orchestrator | 2026-04-04 00:40:37.456228 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-04 00:40:37.456231 | orchestrator | 2026-04-04 00:40:37.456235 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-04 00:40:37.456239 | orchestrator | Saturday 04 April 2026 00:40:36 +0000 (0:00:05.654) 0:00:05.863 ******** 2026-04-04 00:40:37.456243 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:40:37.456248 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:40:37.456251 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:40:37.456255 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:40:37.456259 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:37.456263 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:37.456267 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:40:37.456270 | orchestrator | 2026-04-04 00:40:37.456274 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-04 00:40:37.456278 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:40:37.456283 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:40:37.456287 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:40:37.456291 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:40:37.456294 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:40:37.456298 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:40:37.456302 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:40:37.456306 | orchestrator | 2026-04-04 00:40:37.456310 | orchestrator | 2026-04-04 00:40:37.456314 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:40:37.456317 | orchestrator | Saturday 04 April 2026 00:40:37 +0000 (0:00:00.550) 0:00:06.413 ******** 2026-04-04 00:40:37.456321 | orchestrator | =============================================================================== 2026-04-04 00:40:37.456325 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.65s 2026-04-04 00:40:37.456344 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-04-04 00:40:37.585200 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-04 00:40:37.601020 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-04 
00:40:37.617886 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-04 00:40:37.628268 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-04 00:40:37.644536 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-04 00:40:37.663438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-04 00:40:37.684602 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-04 00:40:37.699153 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-04 00:40:37.721603 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-04 00:40:37.737307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-04 00:40:37.753485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-04 00:40:37.770315 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-04 00:40:37.785535 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-04 00:40:37.799409 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-04 00:40:37.816788 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-04 00:40:37.834212 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-04 00:40:37.852564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-04 00:40:37.871465 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-04 00:40:37.890541 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-04 00:40:37.908339 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-04 00:40:37.926779 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-04 00:40:37.944468 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-04 00:40:37.963243 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-04 00:40:37.982260 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-04 00:40:38.085415 | orchestrator | ok: Runtime: 0:23:22.736029
2026-04-04 00:40:38.189031 |
2026-04-04 00:40:38.189174 | TASK [Deploy services]
2026-04-04 00:40:38.721793 | orchestrator | skipping: Conditional result was False
2026-04-04 00:40:38.740496 |
2026-04-04 00:40:38.740682 | TASK [Deploy in a nutshell]
2026-04-04 00:40:39.416039 | orchestrator |
2026-04-04 00:40:39.416191 | orchestrator | # PULL IMAGES
2026-04-04 00:40:39.416204 | orchestrator |
2026-04-04 00:40:39.416212 | orchestrator | + set -e
2026-04-04 00:40:39.416223 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-04 00:40:39.416235 | orchestrator | ++ export INTERACTIVE=false
2026-04-04 00:40:39.416244 | orchestrator | ++ INTERACTIVE=false
2026-04-04 00:40:39.416272 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-04 00:40:39.416285 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-04 00:40:39.416293 | orchestrator | + source /opt/manager-vars.sh
2026-04-04 00:40:39.416300 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-04 00:40:39.416310 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-04 00:40:39.416317 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-04 00:40:39.416327 | orchestrator | ++ CEPH_VERSION=reef
2026-04-04 00:40:39.416334 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-04 00:40:39.416343 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-04 00:40:39.416349 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-04 00:40:39.416357 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-04 00:40:39.416364 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-04-04 00:40:39.416370 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-04-04 00:40:39.416376 | orchestrator | ++ export ARA=false
2026-04-04 00:40:39.416382 | orchestrator | ++ ARA=false
2026-04-04 00:40:39.416388 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-04 00:40:39.416394 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-04 00:40:39.416401 | orchestrator | ++ export TEMPEST=true
2026-04-04 00:40:39.416407 | orchestrator | ++ TEMPEST=true
2026-04-04 00:40:39.416413 | orchestrator | ++ export IS_ZUUL=true
2026-04-04 00:40:39.416419 | orchestrator | ++ IS_ZUUL=true
2026-04-04 00:40:39.416426 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-04 00:40:39.416432 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-04 00:40:39.416439 | orchestrator | ++ export EXTERNAL_API=false
2026-04-04 00:40:39.416445 | orchestrator | ++ EXTERNAL_API=false
2026-04-04 00:40:39.416451 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-04 00:40:39.416458 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-04 00:40:39.416464 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-04 00:40:39.416470 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-04 00:40:39.416476 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-04 00:40:39.416482 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-04 00:40:39.416488 | orchestrator | + echo
2026-04-04 00:40:39.416495 | orchestrator | + echo '# PULL IMAGES'
2026-04-04 00:40:39.416531 | orchestrator | + echo
2026-04-04 00:40:39.416676 | orchestrator | ++ semver latest 7.0.0
2026-04-04 00:40:39.467000 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-04 00:40:39.467072 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-04 00:40:39.467078 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-04 00:40:40.602726 | orchestrator | 2026-04-04 00:40:40 | INFO  | Trying to run play pull-images in environment custom
2026-04-04 00:40:50.708216 | orchestrator | 2026-04-04 00:40:50 | INFO  | Prepare task for execution of pull-images.
2026-04-04 00:40:50.756301 | orchestrator | 2026-04-04 00:40:50 | INFO  | Task 33176a19-30ef-4a6f-b09a-0fa34feea804 (pull-images) was prepared for execution.
2026-04-04 00:40:50.756380 | orchestrator | 2026-04-04 00:40:50 | INFO  | Task 33176a19-30ef-4a6f-b09a-0fa34feea804 is running in background. No more output. Check ARA for logs.
2026-04-04 00:40:52.062559 | orchestrator | 2026-04-04 00:40:52 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-04 00:41:02.153380 | orchestrator | 2026-04-04 00:41:02 | INFO  | Prepare task for execution of wipe-partitions.
2026-04-04 00:41:02.222394 | orchestrator | 2026-04-04 00:41:02 | INFO  | Task 98bac538-626b-4cf5-ba75-33b43c45b329 (wipe-partitions) was prepared for execution.
2026-04-04 00:41:02.222460 | orchestrator | 2026-04-04 00:41:02 | INFO  | It takes a moment until task 98bac538-626b-4cf5-ba75-33b43c45b329 (wipe-partitions) has been started and output is visible here.
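The trace above shows the testbed's setup pattern: per-stage scripts are exposed under short command names via `ln -sf` into `/usr/local/bin`, and all of them read their settings from sourced env files. A minimal, safe-to-run shell sketch of that symlink pattern (it links a throwaway script into a scratch directory instead of using `sudo` against `/usr/local/bin`; names are examples, not the testbed's actual files):

```shell
#!/usr/bin/env bash
set -e

# Scratch stand-ins for /usr/local/bin and a deploy script, so this
# sketch does not touch the real system paths from the trace.
bindir=$(mktemp -d)
script=$(mktemp)
printf '#!/bin/sh\necho deploying openstack\n' > "$script"
chmod +x "$script"

# Mirrors the trace's `sudo ln -sf <script> /usr/local/bin/<command>`:
# -s makes a symlink, -f replaces any existing link on re-runs.
ln -sf "$script" "$bindir/deploy-openstack"

"$bindir/deploy-openstack"
```

The `-f` flag is what makes the setup idempotent: re-running the bootstrap simply repoints the links at the (possibly updated) scripts.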
2026-04-04 00:41:13.372478 | orchestrator |
2026-04-04 00:41:13.372598 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-04 00:41:13.372612 | orchestrator |
2026-04-04 00:41:13.372620 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-04 00:41:13.372632 | orchestrator | Saturday 04 April 2026 00:41:05 +0000 (0:00:00.119) 0:00:00.119 ********
2026-04-04 00:41:13.372659 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:41:13.372667 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:41:13.372674 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:41:13.372681 | orchestrator |
2026-04-04 00:41:13.372688 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-04 00:41:13.372694 | orchestrator | Saturday 04 April 2026 00:41:06 +0000 (0:00:00.880) 0:00:01.000 ********
2026-04-04 00:41:13.372704 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:13.372710 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:41:13.372716 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:41:13.372722 | orchestrator |
2026-04-04 00:41:13.372728 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-04 00:41:13.372735 | orchestrator | Saturday 04 April 2026 00:41:06 +0000 (0:00:00.217) 0:00:01.217 ********
2026-04-04 00:41:13.372741 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:41:13.372748 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:41:13.372754 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:41:13.372761 | orchestrator |
2026-04-04 00:41:13.372767 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-04 00:41:13.372773 | orchestrator | Saturday 04 April 2026 00:41:06 +0000 (0:00:00.553) 0:00:01.771 ********
2026-04-04 00:41:13.372780 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:13.372786 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:41:13.372792 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:41:13.372881 | orchestrator |
2026-04-04 00:41:13.372888 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-04 00:41:13.372892 | orchestrator | Saturday 04 April 2026 00:41:07 +0000 (0:00:00.208) 0:00:01.980 ********
2026-04-04 00:41:13.372915 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-04 00:41:13.372925 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-04 00:41:13.372932 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-04 00:41:13.372939 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-04 00:41:13.372945 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-04 00:41:13.372951 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-04 00:41:13.372971 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-04 00:41:13.372977 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-04 00:41:13.372983 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-04 00:41:13.372990 | orchestrator |
2026-04-04 00:41:13.372997 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-04 00:41:13.373003 | orchestrator | Saturday 04 April 2026 00:41:08 +0000 (0:00:01.286) 0:00:03.267 ********
2026-04-04 00:41:13.373010 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-04 00:41:13.373016 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-04 00:41:13.373023 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-04 00:41:13.373044 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-04 00:41:13.373051 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-04 00:41:13.373057 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-04 00:41:13.373063 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-04 00:41:13.373070 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-04 00:41:13.373076 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-04 00:41:13.373082 | orchestrator |
2026-04-04 00:41:13.373089 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-04 00:41:13.373095 | orchestrator | Saturday 04 April 2026 00:41:09 +0000 (0:00:01.355) 0:00:04.623 ********
2026-04-04 00:41:13.373102 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-04 00:41:13.373108 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-04 00:41:13.373115 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-04 00:41:13.373127 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-04 00:41:13.373142 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-04 00:41:13.373148 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-04 00:41:13.373154 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-04 00:41:13.373161 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-04 00:41:13.373166 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-04 00:41:13.373173 | orchestrator |
2026-04-04 00:41:13.373179 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-04 00:41:13.373185 | orchestrator | Saturday 04 April 2026 00:41:11 +0000 (0:00:02.073) 0:00:06.696 ********
2026-04-04 00:41:13.373191 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:41:13.373197 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:41:13.373203 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:41:13.373209 | orchestrator |
2026-04-04 00:41:13.373215 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-04 00:41:13.373221 | orchestrator | Saturday 04 April 2026 00:41:12 +0000 (0:00:00.568) 0:00:07.265 ********
2026-04-04 00:41:13.373228 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:41:13.373234 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:41:13.373240 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:41:13.373248 | orchestrator |
2026-04-04 00:41:13.373255 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:41:13.373263 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:13.373270 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:13.373292 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:13.373299 | orchestrator |
2026-04-04 00:41:13.373306 | orchestrator |
2026-04-04 00:41:13.373312 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:41:13.373319 | orchestrator | Saturday 04 April 2026 00:41:13 +0000 (0:00:00.762) 0:00:08.027 ********
2026-04-04 00:41:13.373326 | orchestrator | ===============================================================================
2026-04-04 00:41:13.373332 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.07s
2026-04-04 00:41:13.373339 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s
2026-04-04 00:41:13.373345 | orchestrator | Check device availability ----------------------------------------------- 1.29s
2026-04-04 00:41:13.373352 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.88s
2026-04-04 00:41:13.373357 | orchestrator | Request device events from the kernel ----------------------------------- 0.76s
2026-04-04 00:41:13.373362 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s
2026-04-04 00:41:13.373366 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s
2026-04-04 00:41:13.373370 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s
2026-04-04 00:41:13.373374 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s
2026-04-04 00:41:24.771181 | orchestrator | 2026-04-04 00:41:24 | INFO  | Prepare task for execution of facts.
2026-04-04 00:41:24.837621 | orchestrator | 2026-04-04 00:41:24 | INFO  | Task 237c12b7-02bd-43ee-83ed-add5787487b3 (facts) was prepared for execution.
2026-04-04 00:41:24.837710 | orchestrator | 2026-04-04 00:41:24 | INFO  | It takes a moment until task 237c12b7-02bd-43ee-83ed-add5787487b3 (facts) has been started and output is visible here.
2026-04-04 00:41:35.197276 | orchestrator |
2026-04-04 00:41:35.197354 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-04 00:41:35.197361 | orchestrator |
2026-04-04 00:41:35.197384 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-04 00:41:35.197389 | orchestrator | Saturday 04 April 2026 00:41:27 +0000 (0:00:00.249) 0:00:00.249 ********
2026-04-04 00:41:35.197393 | orchestrator | ok: [testbed-manager]
2026-04-04 00:41:35.197398 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:41:35.197402 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:41:35.197406 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:41:35.197410 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:41:35.197414 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:41:35.197417 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:41:35.197421 | orchestrator |
2026-04-04 00:41:35.197438 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-04 00:41:35.197442 | orchestrator | Saturday 04 April 2026 00:41:28 +0000 (0:00:01.157) 0:00:01.406 ********
2026-04-04 00:41:35.197446 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:41:35.197451 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:41:35.197455 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:41:35.197459 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:41:35.197463 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:35.197466 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:41:35.197470 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:41:35.197474 | orchestrator |
2026-04-04 00:41:35.197478 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-04 00:41:35.197482 | orchestrator |
2026-04-04 00:41:35.197486 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-04 00:41:35.197490 | orchestrator | Saturday 04 April 2026 00:41:29 +0000 (0:00:01.055) 0:00:02.462 ********
2026-04-04 00:41:35.197494 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:41:35.197498 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:41:35.197501 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:41:35.197505 | orchestrator | ok: [testbed-manager]
2026-04-04 00:41:35.197509 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:41:35.197512 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:41:35.197516 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:41:35.197520 | orchestrator |
2026-04-04 00:41:35.197524 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-04 00:41:35.197527 | orchestrator |
2026-04-04 00:41:35.197531 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-04 00:41:35.197535 | orchestrator | Saturday 04 April 2026 00:41:34 +0000 (0:00:04.713) 0:00:07.176 ********
2026-04-04 00:41:35.197539 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:41:35.197543 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:41:35.197547 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:41:35.197550 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:41:35.197554 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:35.197558 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:41:35.197561 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:41:35.197565 | orchestrator |
2026-04-04 00:41:35.197569 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:41:35.197573 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:35.197578 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:35.197582 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:35.197585 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:35.197589 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:35.197598 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:35.197602 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:41:35.197606 | orchestrator |
2026-04-04 00:41:35.197610 | orchestrator |
2026-04-04 00:41:35.197613 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:41:35.197617 | orchestrator | Saturday 04 April 2026 00:41:34 +0000 (0:00:00.480) 0:00:07.657 ********
2026-04-04 00:41:35.197621 | orchestrator | ===============================================================================
2026-04-04 00:41:35.197625 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.71s
2026-04-04 00:41:35.197629 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s
2026-04-04 00:41:35.197633 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2026-04-04 00:41:35.197636 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s
2026-04-04 00:41:36.633399 | orchestrator | 2026-04-04 00:41:36 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-04 00:41:36.695398 | orchestrator | 2026-04-04 00:41:36 | INFO  | Task c976635a-01e3-4a6d-a190-b2da00411cca (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-04 00:41:36.695469 | orchestrator | 2026-04-04 00:41:36 | INFO  | It takes a moment until task c976635a-01e3-4a6d-a190-b2da00411cca (ceph-configure-lvm-volumes) has been started and output is visible here.
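The wipe-partitions play shown earlier boils down to a short per-disk command sequence: clear signatures with `wipefs`, zero the first 32M with `dd`, then tell udev to re-scan. A hedged shell sketch of the equivalent manual steps (it only *prints* the commands; the real sequence is destructive, and the device names are just the testbed's example disks):

```shell
#!/usr/bin/env bash
set -e

# Print, rather than run, the wipe sequence per disk. Remove the
# leading `echo`s (and run as root) to execute for real -- this
# irreversibly destroys data on the named devices.
wipe_disk() {
    local dev=$1
    echo wipefs --all "$dev"                       # clear filesystem/partition signatures
    echo dd if=/dev/zero of="$dev" bs=1M count=32  # overwrite first 32M with zeros
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    wipe_disk "$dev"
done

echo udevadm control --reload-rules                # reload udev rules
echo udevadm trigger                               # request device events from the kernel
```

The final `udevadm` pair matters: without a reload and trigger, stale `/dev/disk/by-*` links from the wiped signatures can linger and confuse the Ceph LVM configuration that runs next.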
2026-04-04 00:41:46.948210 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-04 00:41:46.948325 | orchestrator | 2.16.14
2026-04-04 00:41:46.948345 | orchestrator |
2026-04-04 00:41:46.948371 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-04 00:41:46.948387 | orchestrator |
2026-04-04 00:41:46.948400 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-04 00:41:46.948414 | orchestrator | Saturday 04 April 2026 00:41:40 +0000 (0:00:00.215) 0:00:00.215 ********
2026-04-04 00:41:46.948426 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 00:41:46.948439 | orchestrator |
2026-04-04 00:41:46.948452 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-04 00:41:46.948466 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.252) 0:00:00.468 ********
2026-04-04 00:41:46.948480 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:41:46.948492 | orchestrator |
2026-04-04 00:41:46.948504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948517 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.187) 0:00:00.655 ********
2026-04-04 00:41:46.948529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-04 00:41:46.948542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-04 00:41:46.948554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-04 00:41:46.948566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-04 00:41:46.948577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-04 00:41:46.948585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-04 00:41:46.948593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-04 00:41:46.948600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-04 00:41:46.948608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-04 00:41:46.948615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-04 00:41:46.948642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-04 00:41:46.948650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-04 00:41:46.948657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-04 00:41:46.948664 | orchestrator |
2026-04-04 00:41:46.948672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948679 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.302) 0:00:00.958 ********
2026-04-04 00:41:46.948686 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948693 | orchestrator |
2026-04-04 00:41:46.948701 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948708 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.348) 0:00:01.307 ********
2026-04-04 00:41:46.948715 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948722 | orchestrator |
2026-04-04 00:41:46.948730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948742 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.191) 0:00:01.499 ********
2026-04-04 00:41:46.948751 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948759 | orchestrator |
2026-04-04 00:41:46.948796 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948806 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.163) 0:00:01.662 ********
2026-04-04 00:41:46.948815 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948823 | orchestrator |
2026-04-04 00:41:46.948832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948841 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.153) 0:00:01.815 ********
2026-04-04 00:41:46.948849 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948857 | orchestrator |
2026-04-04 00:41:46.948866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948875 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.164) 0:00:01.980 ********
2026-04-04 00:41:46.948883 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948891 | orchestrator |
2026-04-04 00:41:46.948900 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948908 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.172) 0:00:02.153 ********
2026-04-04 00:41:46.948917 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948925 | orchestrator |
2026-04-04 00:41:46.948934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948943 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.158) 0:00:02.312 ********
2026-04-04 00:41:46.948951 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.948960 | orchestrator |
2026-04-04 00:41:46.948968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.948976 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.181) 0:00:02.493 ********
2026-04-04 00:41:46.948985 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79)
2026-04-04 00:41:46.948994 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79)
2026-04-04 00:41:46.949002 | orchestrator |
2026-04-04 00:41:46.949011 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.949036 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.373) 0:00:02.866 ********
2026-04-04 00:41:46.949045 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c)
2026-04-04 00:41:46.949054 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c)
2026-04-04 00:41:46.949062 | orchestrator |
2026-04-04 00:41:46.949070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.949086 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.383) 0:00:03.249 ********
2026-04-04 00:41:46.949095 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10)
2026-04-04 00:41:46.949103 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10)
2026-04-04 00:41:46.949110 | orchestrator |
2026-04-04 00:41:46.949118 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.949125 | orchestrator | Saturday 04 April 2026 00:41:44 +0000 (0:00:00.459) 0:00:03.708 ********
2026-04-04 00:41:46.949132 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903)
2026-04-04 00:41:46.949139 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903)
2026-04-04 00:41:46.949146 | orchestrator |
2026-04-04 00:41:46.949154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:41:46.949161 | orchestrator | Saturday 04 April 2026 00:41:44 +0000 (0:00:00.458) 0:00:04.167 ********
2026-04-04 00:41:46.949168 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-04 00:41:46.949175 | orchestrator |
2026-04-04 00:41:46.949182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949189 | orchestrator | Saturday 04 April 2026 00:41:45 +0000 (0:00:00.515) 0:00:04.682 ********
2026-04-04 00:41:46.949204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-04 00:41:46.949212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-04 00:41:46.949219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-04 00:41:46.949226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-04 00:41:46.949233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-04 00:41:46.949240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-04 00:41:46.949247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-04 00:41:46.949254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-04 00:41:46.949261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-04 00:41:46.949269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-04 00:41:46.949276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-04 00:41:46.949283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-04 00:41:46.949290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-04 00:41:46.949297 | orchestrator |
2026-04-04 00:41:46.949304 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949311 | orchestrator | Saturday 04 April 2026 00:41:45 +0000 (0:00:00.275) 0:00:04.958 ********
2026-04-04 00:41:46.949318 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.949326 | orchestrator |
2026-04-04 00:41:46.949333 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949340 | orchestrator | Saturday 04 April 2026 00:41:45 +0000 (0:00:00.181) 0:00:05.139 ********
2026-04-04 00:41:46.949347 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.949354 | orchestrator |
2026-04-04 00:41:46.949361 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949368 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.179) 0:00:05.318 ********
2026-04-04 00:41:46.949376 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.949388 | orchestrator |
2026-04-04 00:41:46.949395 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949402 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.178) 0:00:05.497 ********
2026-04-04 00:41:46.949409 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.949417 | orchestrator |
2026-04-04 00:41:46.949424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949431 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.205) 0:00:05.703 ********
2026-04-04 00:41:46.949438 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.949445 | orchestrator |
2026-04-04 00:41:46.949456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949463 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.176) 0:00:05.879 ********
2026-04-04 00:41:46.949470 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.949478 | orchestrator |
2026-04-04 00:41:46.949485 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:46.949492 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.169) 0:00:06.049 ********
2026-04-04 00:41:46.949499 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:46.949506 | orchestrator |
2026-04-04 00:41:46.949518 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:53.463481 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.162) 0:00:06.211 ********
2026-04-04 00:41:53.463584 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463594 | orchestrator |
2026-04-04 00:41:53.463602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:53.463609 | orchestrator | Saturday 04 April 2026 00:41:47 +0000 (0:00:00.171) 0:00:06.383 ********
2026-04-04 00:41:53.463647 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-04 00:41:53.463655 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-04 00:41:53.463663 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-04 00:41:53.463668 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-04 00:41:53.463674 | orchestrator |
2026-04-04 00:41:53.463681 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:53.463688 | orchestrator | Saturday 04 April 2026 00:41:47 +0000 (0:00:00.802) 0:00:07.186 ********
2026-04-04 00:41:53.463695 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463700 | orchestrator |
2026-04-04 00:41:53.463707 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:53.463714 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.171) 0:00:07.358 ********
2026-04-04 00:41:53.463720 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463726 | orchestrator |
2026-04-04 00:41:53.463732 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:53.463738 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.180) 0:00:07.538 ********
2026-04-04 00:41:53.463744 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463750 | orchestrator |
2026-04-04 00:41:53.463756 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:41:53.463807 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.183) 0:00:07.722 ********
2026-04-04 00:41:53.463815 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463821 | orchestrator |
2026-04-04 00:41:53.463828 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-04 00:41:53.463834 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.187) 0:00:07.909 ********
2026-04-04 00:41:53.463841 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-04 00:41:53.463848 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-04 00:41:53.463853 | orchestrator |
2026-04-04 00:41:53.463860 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-04 00:41:53.463866 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.164) 0:00:08.073 ********
2026-04-04 00:41:53.463894 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463900 | orchestrator |
2026-04-04 00:41:53.463906 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-04 00:41:53.463912 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.119) 0:00:08.193 ********
2026-04-04 00:41:53.463918 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463924 | orchestrator |
2026-04-04 00:41:53.463933 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-04 00:41:53.463940 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.119) 0:00:08.312 ********
2026-04-04 00:41:53.463946 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:41:53.463953 | orchestrator |
2026-04-04 00:41:53.463959 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-04 00:41:53.463965 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.121) 0:00:08.434 ********
2026-04-04 00:41:53.463971 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:41:53.463978 | orchestrator |
2026-04-04 00:41:53.463984 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-04 00:41:53.463994 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.119) 0:00:08.554 ********
2026-04-04 00:41:53.464003 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'}})
2026-04-04 00:41:53.464010 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'}})
2026-04-04 00:41:53.464016 | orchestrator |
2026-04-04 00:41:53.464023 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-04 00:41:53.464029 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.146) 0:00:08.700 ******** 2026-04-04 00:41:53.464037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'}})  2026-04-04 00:41:53.464057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'}})  2026-04-04 00:41:53.464064 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464071 | orchestrator | 2026-04-04 00:41:53.464077 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-04 00:41:53.464083 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.136) 0:00:08.836 ******** 2026-04-04 00:41:53.464090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'}})  2026-04-04 00:41:53.464096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'}})  2026-04-04 00:41:53.464102 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464109 | orchestrator | 2026-04-04 00:41:53.464115 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-04 00:41:53.464122 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.132) 0:00:08.969 ******** 2026-04-04 00:41:53.464128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'}})  2026-04-04 00:41:53.464154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'}})  2026-04-04 00:41:53.464161 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464167 | 
orchestrator | 2026-04-04 00:41:53.464174 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-04 00:41:53.464181 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.252) 0:00:09.221 ******** 2026-04-04 00:41:53.464188 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:53.464195 | orchestrator | 2026-04-04 00:41:53.464201 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-04 00:41:53.464208 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.125) 0:00:09.346 ******** 2026-04-04 00:41:53.464215 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:53.464230 | orchestrator | 2026-04-04 00:41:53.464250 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-04 00:41:53.464258 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.118) 0:00:09.464 ******** 2026-04-04 00:41:53.464265 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464272 | orchestrator | 2026-04-04 00:41:53.464289 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-04 00:41:53.464297 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.119) 0:00:09.584 ******** 2026-04-04 00:41:53.464304 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464311 | orchestrator | 2026-04-04 00:41:53.464318 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-04 00:41:53.464325 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.115) 0:00:09.700 ******** 2026-04-04 00:41:53.464332 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464338 | orchestrator | 2026-04-04 00:41:53.464346 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-04 00:41:53.464353 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 
(0:00:00.131) 0:00:09.832 ******** 2026-04-04 00:41:53.464360 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:41:53.464367 | orchestrator |  "ceph_osd_devices": { 2026-04-04 00:41:53.464374 | orchestrator |  "sdb": { 2026-04-04 00:41:53.464382 | orchestrator |  "osd_lvm_uuid": "7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3" 2026-04-04 00:41:53.464389 | orchestrator |  }, 2026-04-04 00:41:53.464397 | orchestrator |  "sdc": { 2026-04-04 00:41:53.464404 | orchestrator |  "osd_lvm_uuid": "ecc56a61-ea8b-515f-be54-1cf9bb6e81cf" 2026-04-04 00:41:53.464411 | orchestrator |  } 2026-04-04 00:41:53.464418 | orchestrator |  } 2026-04-04 00:41:53.464425 | orchestrator | } 2026-04-04 00:41:53.464432 | orchestrator | 2026-04-04 00:41:53.464438 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-04 00:41:53.464445 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.118) 0:00:09.950 ******** 2026-04-04 00:41:53.464452 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464459 | orchestrator | 2026-04-04 00:41:53.464465 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-04 00:41:53.464472 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.096) 0:00:10.047 ******** 2026-04-04 00:41:53.464479 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464485 | orchestrator | 2026-04-04 00:41:53.464492 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-04 00:41:53.464499 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.116) 0:00:10.163 ******** 2026-04-04 00:41:53.464505 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:53.464512 | orchestrator | 2026-04-04 00:41:53.464519 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-04 00:41:53.464526 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 
(0:00:00.106) 0:00:10.269 ******** 2026-04-04 00:41:53.464532 | orchestrator | changed: [testbed-node-3] => { 2026-04-04 00:41:53.464539 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-04 00:41:53.464546 | orchestrator |  "ceph_osd_devices": { 2026-04-04 00:41:53.464553 | orchestrator |  "sdb": { 2026-04-04 00:41:53.464559 | orchestrator |  "osd_lvm_uuid": "7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3" 2026-04-04 00:41:53.464566 | orchestrator |  }, 2026-04-04 00:41:53.464573 | orchestrator |  "sdc": { 2026-04-04 00:41:53.464580 | orchestrator |  "osd_lvm_uuid": "ecc56a61-ea8b-515f-be54-1cf9bb6e81cf" 2026-04-04 00:41:53.464586 | orchestrator |  } 2026-04-04 00:41:53.464593 | orchestrator |  }, 2026-04-04 00:41:53.464601 | orchestrator |  "lvm_volumes": [ 2026-04-04 00:41:53.464609 | orchestrator |  { 2026-04-04 00:41:53.464616 | orchestrator |  "data": "osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3", 2026-04-04 00:41:53.464623 | orchestrator |  "data_vg": "ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3" 2026-04-04 00:41:53.464635 | orchestrator |  }, 2026-04-04 00:41:53.464642 | orchestrator |  { 2026-04-04 00:41:53.464649 | orchestrator |  "data": "osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf", 2026-04-04 00:41:53.464656 | orchestrator |  "data_vg": "ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf" 2026-04-04 00:41:53.464662 | orchestrator |  } 2026-04-04 00:41:53.464669 | orchestrator |  ] 2026-04-04 00:41:53.464675 | orchestrator |  } 2026-04-04 00:41:53.464682 | orchestrator | } 2026-04-04 00:41:53.464688 | orchestrator | 2026-04-04 00:41:53.464694 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-04 00:41:53.464700 | orchestrator | Saturday 04 April 2026 00:41:51 +0000 (0:00:00.182) 0:00:10.452 ******** 2026-04-04 00:41:53.464707 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-04 00:41:53.464712 | orchestrator | 2026-04-04 00:41:53.464719 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-04 00:41:53.464725 | orchestrator | 2026-04-04 00:41:53.464732 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-04 00:41:53.464739 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:01.841) 0:00:12.293 ******** 2026-04-04 00:41:53.464746 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-04 00:41:53.464752 | orchestrator | 2026-04-04 00:41:53.464785 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-04 00:41:53.464792 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.226) 0:00:12.520 ******** 2026-04-04 00:41:53.464799 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:41:53.464805 | orchestrator | 2026-04-04 00:41:53.464817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330233 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.209) 0:00:12.729 ******** 2026-04-04 00:42:00.330346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-04 00:42:00.330372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-04 00:42:00.330391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-04 00:42:00.330403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-04 00:42:00.330414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-04 00:42:00.330426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-04 00:42:00.330437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-04 00:42:00.330453 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-04 00:42:00.330465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-04 00:42:00.330476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-04 00:42:00.330487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-04 00:42:00.330498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-04 00:42:00.330509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-04 00:42:00.330521 | orchestrator | 2026-04-04 00:42:00.330533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330545 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.314) 0:00:13.044 ******** 2026-04-04 00:42:00.330556 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330568 | orchestrator | 2026-04-04 00:42:00.330579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330590 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.177) 0:00:13.222 ******** 2026-04-04 00:42:00.330627 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330638 | orchestrator | 2026-04-04 00:42:00.330650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330661 | orchestrator | Saturday 04 April 2026 00:41:54 +0000 (0:00:00.160) 0:00:13.382 ******** 2026-04-04 00:42:00.330672 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330682 | orchestrator | 2026-04-04 00:42:00.330693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330704 | 
orchestrator | Saturday 04 April 2026 00:41:54 +0000 (0:00:00.149) 0:00:13.532 ******** 2026-04-04 00:42:00.330715 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330726 | orchestrator | 2026-04-04 00:42:00.330737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330747 | orchestrator | Saturday 04 April 2026 00:41:54 +0000 (0:00:00.157) 0:00:13.689 ******** 2026-04-04 00:42:00.330802 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330816 | orchestrator | 2026-04-04 00:42:00.330828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330841 | orchestrator | Saturday 04 April 2026 00:41:54 +0000 (0:00:00.163) 0:00:13.853 ******** 2026-04-04 00:42:00.330854 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330867 | orchestrator | 2026-04-04 00:42:00.330878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330888 | orchestrator | Saturday 04 April 2026 00:41:54 +0000 (0:00:00.414) 0:00:14.267 ******** 2026-04-04 00:42:00.330899 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330910 | orchestrator | 2026-04-04 00:42:00.330921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330932 | orchestrator | Saturday 04 April 2026 00:41:55 +0000 (0:00:00.185) 0:00:14.453 ******** 2026-04-04 00:42:00.330942 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.330953 | orchestrator | 2026-04-04 00:42:00.330964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.330975 | orchestrator | Saturday 04 April 2026 00:41:55 +0000 (0:00:00.181) 0:00:14.634 ******** 2026-04-04 00:42:00.330985 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d) 2026-04-04 00:42:00.330998 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d) 2026-04-04 00:42:00.331008 | orchestrator | 2026-04-04 00:42:00.331037 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.331048 | orchestrator | Saturday 04 April 2026 00:41:55 +0000 (0:00:00.401) 0:00:15.036 ******** 2026-04-04 00:42:00.331060 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a) 2026-04-04 00:42:00.331071 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a) 2026-04-04 00:42:00.331081 | orchestrator | 2026-04-04 00:42:00.331092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.331103 | orchestrator | Saturday 04 April 2026 00:41:56 +0000 (0:00:00.413) 0:00:15.449 ******** 2026-04-04 00:42:00.331114 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca) 2026-04-04 00:42:00.331125 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca) 2026-04-04 00:42:00.331136 | orchestrator | 2026-04-04 00:42:00.331147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:00.331175 | orchestrator | Saturday 04 April 2026 00:41:56 +0000 (0:00:00.420) 0:00:15.870 ******** 2026-04-04 00:42:00.331186 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338) 2026-04-04 00:42:00.331197 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338) 2026-04-04 00:42:00.331209 | orchestrator | 2026-04-04 00:42:00.331228 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-04 00:42:00.331240 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.414) 0:00:16.284 ******** 2026-04-04 00:42:00.331251 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-04 00:42:00.331262 | orchestrator | 2026-04-04 00:42:00.331273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331283 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.323) 0:00:16.608 ******** 2026-04-04 00:42:00.331294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-04 00:42:00.331305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-04 00:42:00.331316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-04 00:42:00.331326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-04 00:42:00.331337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-04 00:42:00.331348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-04 00:42:00.331359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-04 00:42:00.331369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-04 00:42:00.331380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-04 00:42:00.331391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-04 00:42:00.331402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-04 00:42:00.331412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-04 00:42:00.331423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-04 00:42:00.331434 | orchestrator | 2026-04-04 00:42:00.331445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331455 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.371) 0:00:16.980 ******** 2026-04-04 00:42:00.331466 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331477 | orchestrator | 2026-04-04 00:42:00.331488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331499 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.198) 0:00:17.179 ******** 2026-04-04 00:42:00.331509 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331520 | orchestrator | 2026-04-04 00:42:00.331531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331542 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.615) 0:00:17.794 ******** 2026-04-04 00:42:00.331553 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331564 | orchestrator | 2026-04-04 00:42:00.331575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331585 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.213) 0:00:18.007 ******** 2026-04-04 00:42:00.331596 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331607 | orchestrator | 2026-04-04 00:42:00.331618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331629 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.178) 0:00:18.186 ******** 2026-04-04 00:42:00.331640 
| orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331650 | orchestrator | 2026-04-04 00:42:00.331661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331672 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.168) 0:00:18.355 ******** 2026-04-04 00:42:00.331683 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331700 | orchestrator | 2026-04-04 00:42:00.331717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331728 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.178) 0:00:18.533 ******** 2026-04-04 00:42:00.331739 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331750 | orchestrator | 2026-04-04 00:42:00.331823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331835 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.172) 0:00:18.705 ******** 2026-04-04 00:42:00.331846 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:00.331858 | orchestrator | 2026-04-04 00:42:00.331869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331880 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.187) 0:00:18.893 ******** 2026-04-04 00:42:00.331891 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-04 00:42:00.331903 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-04 00:42:00.331914 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-04 00:42:00.331925 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-04 00:42:00.331936 | orchestrator | 2026-04-04 00:42:00.331947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:00.331958 | orchestrator | Saturday 04 April 2026 00:42:00 +0000 (0:00:00.601) 
0:00:19.494 ******** 2026-04-04 00:42:00.331969 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.764741 | orchestrator | 2026-04-04 00:42:05.764932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:05.764950 | orchestrator | Saturday 04 April 2026 00:42:00 +0000 (0:00:00.174) 0:00:19.669 ******** 2026-04-04 00:42:05.764962 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.764973 | orchestrator | 2026-04-04 00:42:05.764983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:05.764994 | orchestrator | Saturday 04 April 2026 00:42:00 +0000 (0:00:00.170) 0:00:19.839 ******** 2026-04-04 00:42:05.765004 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.765014 | orchestrator | 2026-04-04 00:42:05.765024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:05.765034 | orchestrator | Saturday 04 April 2026 00:42:00 +0000 (0:00:00.174) 0:00:20.014 ******** 2026-04-04 00:42:05.765043 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.765054 | orchestrator | 2026-04-04 00:42:05.765064 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-04 00:42:05.765074 | orchestrator | Saturday 04 April 2026 00:42:00 +0000 (0:00:00.174) 0:00:20.189 ******** 2026-04-04 00:42:05.765085 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-04 00:42:05.765095 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-04 00:42:05.765105 | orchestrator | 2026-04-04 00:42:05.765115 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-04 00:42:05.765125 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.277) 0:00:20.466 ******** 2026-04-04 00:42:05.765135 | orchestrator | skipping: 
[testbed-node-4] 2026-04-04 00:42:05.765146 | orchestrator | 2026-04-04 00:42:05.765156 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-04 00:42:05.765166 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.105) 0:00:20.571 ******** 2026-04-04 00:42:05.765176 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.765186 | orchestrator | 2026-04-04 00:42:05.765196 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-04 00:42:05.765207 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.106) 0:00:20.678 ******** 2026-04-04 00:42:05.765216 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.765226 | orchestrator | 2026-04-04 00:42:05.765237 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-04 00:42:05.765247 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.111) 0:00:20.790 ******** 2026-04-04 00:42:05.765282 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:42:05.765293 | orchestrator | 2026-04-04 00:42:05.765303 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-04 00:42:05.765314 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.102) 0:00:20.892 ******** 2026-04-04 00:42:05.765324 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b1fc2ad7-1445-5918-af09-c59800dad69a'}}) 2026-04-04 00:42:05.765335 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8b2f720-8689-5378-93a8-1716210ee10b'}}) 2026-04-04 00:42:05.765346 | orchestrator | 2026-04-04 00:42:05.765356 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-04 00:42:05.765366 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.146) 0:00:21.039 ******** 2026-04-04 00:42:05.765377 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b1fc2ad7-1445-5918-af09-c59800dad69a'}})  2026-04-04 00:42:05.765388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8b2f720-8689-5378-93a8-1716210ee10b'}})  2026-04-04 00:42:05.765398 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.765407 | orchestrator | 2026-04-04 00:42:05.765417 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-04 00:42:05.765427 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.127) 0:00:21.166 ******** 2026-04-04 00:42:05.765437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b1fc2ad7-1445-5918-af09-c59800dad69a'}})  2026-04-04 00:42:05.765447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8b2f720-8689-5378-93a8-1716210ee10b'}})  2026-04-04 00:42:05.765458 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.765468 | orchestrator | 2026-04-04 00:42:05.765478 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-04 00:42:05.765488 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.127) 0:00:21.294 ******** 2026-04-04 00:42:05.765498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b1fc2ad7-1445-5918-af09-c59800dad69a'}})  2026-04-04 00:42:05.765508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8b2f720-8689-5378-93a8-1716210ee10b'}})  2026-04-04 00:42:05.765517 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:05.765527 | orchestrator | 2026-04-04 00:42:05.765553 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-04 00:42:05.765564 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 
(0:00:00.119) 0:00:21.414 ********
2026-04-04 00:42:05.765573 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:05.765583 | orchestrator |
2026-04-04 00:42:05.765592 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-04 00:42:05.765603 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.113) 0:00:21.527 ********
2026-04-04 00:42:05.765613 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:05.765623 | orchestrator |
2026-04-04 00:42:05.765633 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-04 00:42:05.765644 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.111) 0:00:21.638 ********
2026-04-04 00:42:05.765672 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:05.765683 | orchestrator |
2026-04-04 00:42:05.765694 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-04 00:42:05.765704 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.106) 0:00:21.745 ********
2026-04-04 00:42:05.765715 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:05.765725 | orchestrator |
2026-04-04 00:42:05.765735 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-04 00:42:05.765747 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.247) 0:00:21.992 ********
2026-04-04 00:42:05.765784 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:05.765808 | orchestrator |
2026-04-04 00:42:05.765818 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-04 00:42:05.765827 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.115) 0:00:22.108 ********
2026-04-04 00:42:05.765837 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 00:42:05.765847 | orchestrator |     "ceph_osd_devices": {
2026-04-04 00:42:05.765855 | orchestrator |         "sdb": {
2026-04-04 00:42:05.765866 | orchestrator |             "osd_lvm_uuid": "b1fc2ad7-1445-5918-af09-c59800dad69a"
2026-04-04 00:42:05.765875 | orchestrator |         },
2026-04-04 00:42:05.765885 | orchestrator |         "sdc": {
2026-04-04 00:42:05.765894 | orchestrator |             "osd_lvm_uuid": "f8b2f720-8689-5378-93a8-1716210ee10b"
2026-04-04 00:42:05.765903 | orchestrator |         }
2026-04-04 00:42:05.765913 | orchestrator |     }
2026-04-04 00:42:05.765923 | orchestrator | }
2026-04-04 00:42:05.765932 | orchestrator |
2026-04-04 00:42:05.765942 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-04 00:42:05.765951 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.121) 0:00:22.230 ********
2026-04-04 00:42:05.765960 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:05.765969 | orchestrator |
2026-04-04 00:42:05.765978 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-04 00:42:05.765988 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.152) 0:00:22.382 ********
2026-04-04 00:42:05.765996 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:05.766006 | orchestrator |
2026-04-04 00:42:05.766056 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-04 00:42:05.766067 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.103) 0:00:22.486 ********
2026-04-04 00:42:05.766077 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:05.766086 | orchestrator |
2026-04-04 00:42:05.766096 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-04 00:42:05.766105 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.114) 0:00:22.600 ********
2026-04-04 00:42:05.766114 | orchestrator | changed: [testbed-node-4] => {
2026-04-04 00:42:05.766124 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-04 00:42:05.766133 | orchestrator |         "ceph_osd_devices": {
2026-04-04 00:42:05.766142 | orchestrator |             "sdb": {
2026-04-04 00:42:05.766151 | orchestrator |                 "osd_lvm_uuid": "b1fc2ad7-1445-5918-af09-c59800dad69a"
2026-04-04 00:42:05.766161 | orchestrator |             },
2026-04-04 00:42:05.766170 | orchestrator |             "sdc": {
2026-04-04 00:42:05.766179 | orchestrator |                 "osd_lvm_uuid": "f8b2f720-8689-5378-93a8-1716210ee10b"
2026-04-04 00:42:05.766189 | orchestrator |             }
2026-04-04 00:42:05.766199 | orchestrator |         },
2026-04-04 00:42:05.766208 | orchestrator |         "lvm_volumes": [
2026-04-04 00:42:05.766218 | orchestrator |             {
2026-04-04 00:42:05.766228 | orchestrator |                 "data": "osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a",
2026-04-04 00:42:05.766238 | orchestrator |                 "data_vg": "ceph-b1fc2ad7-1445-5918-af09-c59800dad69a"
2026-04-04 00:42:05.766247 | orchestrator |             },
2026-04-04 00:42:05.766257 | orchestrator |             {
2026-04-04 00:42:05.766266 | orchestrator |                 "data": "osd-block-f8b2f720-8689-5378-93a8-1716210ee10b",
2026-04-04 00:42:05.766275 | orchestrator |                 "data_vg": "ceph-f8b2f720-8689-5378-93a8-1716210ee10b"
2026-04-04 00:42:05.766285 | orchestrator |             }
2026-04-04 00:42:05.766294 | orchestrator |         ]
2026-04-04 00:42:05.766304 | orchestrator |     }
2026-04-04 00:42:05.766313 | orchestrator | }
2026-04-04 00:42:05.766322 | orchestrator |
2026-04-04 00:42:05.766332 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-04 00:42:05.766341 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.197) 0:00:22.798 ********
2026-04-04 00:42:05.766350 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-04 00:42:05.766359 | orchestrator |
2026-04-04 00:42:05.766377 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-04 00:42:05.766387 | orchestrator |
2026-04-04 00:42:05.766397 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
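The `Print configuration data` output above shows the mapping this play applies: each entry in `ceph_osd_devices` becomes one `lvm_volumes` item whose LV is named `osd-block-<osd_lvm_uuid>` and whose VG is `ceph-<osd_lvm_uuid>`. A minimal sketch of that transformation (the function name is hypothetical; the naming convention is taken from the logged JSON):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Expand a ceph_osd_devices dict into the lvm_volumes list seen in the log.

    Each OSD device contributes one entry: LV 'osd-block-<uuid>' inside
    VG 'ceph-<uuid>', where <uuid> is the device's osd_lvm_uuid.
    """
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for _device, cfg in sorted(ceph_osd_devices.items())
    ]

# Values taken from the testbed-node-4 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "b1fc2ad7-1445-5918-af09-c59800dad69a"},
    "sdc": {"osd_lvm_uuid": "f8b2f720-8689-5378-93a8-1716210ee10b"},
}
print(build_lvm_volumes(devices))
```

Run against the logged `ceph_osd_devices` for testbed-node-4, this reproduces exactly the `lvm_volumes` list written to the configuration file.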
2026-04-04 00:42:05.766407 | orchestrator | Saturday 04 April 2026 00:42:04 +0000 (0:00:01.034) 0:00:23.833 ********
2026-04-04 00:42:05.766417 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-04 00:42:05.766426 | orchestrator |
2026-04-04 00:42:05.766436 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-04 00:42:05.766445 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.492) 0:00:24.326 ********
2026-04-04 00:42:05.766455 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:42:05.766465 | orchestrator |
2026-04-04 00:42:05.766475 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:05.766485 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.481) 0:00:24.807 ********
2026-04-04 00:42:05.766496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-04 00:42:05.766506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-04 00:42:05.766516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-04 00:42:05.766527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-04 00:42:05.766537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-04 00:42:05.766558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-04 00:42:12.543087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-04 00:42:12.543193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-04 00:42:12.543209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-04 00:42:12.543219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-04 00:42:12.543247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-04 00:42:12.543258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-04 00:42:12.543267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-04 00:42:12.543277 | orchestrator |
2026-04-04 00:42:12.543288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543299 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.284) 0:00:25.092 ********
2026-04-04 00:42:12.543308 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543318 | orchestrator |
2026-04-04 00:42:12.543326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543337 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.167) 0:00:25.259 ********
2026-04-04 00:42:12.543346 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543355 | orchestrator |
2026-04-04 00:42:12.543365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543374 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.152) 0:00:25.412 ********
2026-04-04 00:42:12.543383 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543393 | orchestrator |
2026-04-04 00:42:12.543401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543411 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.131) 0:00:25.544 ********
2026-04-04 00:42:12.543424 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543434 | orchestrator |
2026-04-04 00:42:12.543443 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543452 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.136) 0:00:25.681 ********
2026-04-04 00:42:12.543485 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543496 | orchestrator |
2026-04-04 00:42:12.543505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543515 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.154) 0:00:25.835 ********
2026-04-04 00:42:12.543526 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543535 | orchestrator |
2026-04-04 00:42:12.543545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543555 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.138) 0:00:25.974 ********
2026-04-04 00:42:12.543564 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543575 | orchestrator |
2026-04-04 00:42:12.543584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543594 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.199) 0:00:26.173 ********
2026-04-04 00:42:12.543603 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.543612 | orchestrator |
2026-04-04 00:42:12.543621 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543630 | orchestrator | Saturday 04 April 2026 00:42:07 +0000 (0:00:00.172) 0:00:26.346 ********
2026-04-04 00:42:12.543639 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999)
2026-04-04 00:42:12.543650 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999)
2026-04-04 00:42:12.543659 | orchestrator |
2026-04-04 00:42:12.543668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543678 | orchestrator | Saturday 04 April 2026 00:42:07 +0000 (0:00:00.506) 0:00:26.852 ********
2026-04-04 00:42:12.543687 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434)
2026-04-04 00:42:12.543698 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434)
2026-04-04 00:42:12.543707 | orchestrator |
2026-04-04 00:42:12.543717 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543726 | orchestrator | Saturday 04 April 2026 00:42:08 +0000 (0:00:00.645) 0:00:27.498 ********
2026-04-04 00:42:12.543737 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364)
2026-04-04 00:42:12.543746 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364)
2026-04-04 00:42:12.543781 | orchestrator |
2026-04-04 00:42:12.543791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543800 | orchestrator | Saturday 04 April 2026 00:42:08 +0000 (0:00:00.388) 0:00:27.886 ********
2026-04-04 00:42:12.543810 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0)
2026-04-04 00:42:12.543819 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0)
2026-04-04 00:42:12.543828 | orchestrator |
2026-04-04 00:42:12.543838 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.543848 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.397) 0:00:28.284 ********
2026-04-04 00:42:12.543858 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-04 00:42:12.543867 | orchestrator |
2026-04-04 00:42:12.543877 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.543904 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.319) 0:00:28.603 ********
2026-04-04 00:42:12.543914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-04 00:42:12.543923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-04 00:42:12.543933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-04 00:42:12.543943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-04 00:42:12.543972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-04 00:42:12.543981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-04 00:42:12.543990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-04 00:42:12.544000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-04 00:42:12.544008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-04 00:42:12.544016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-04 00:42:12.544025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-04 00:42:12.544033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-04 00:42:12.544043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-04 00:42:12.544052 | orchestrator |
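The repeated `_add-device-links.yml` includes above run once per kernel device and attach the matching `/dev/disk/by-id` symlink names (the `scsi-0QEMU_…` / `scsi-SQEMU_…` items) to that device's entry. A hedged sketch of the grouping step, using synthetic `(link, target)` pairs instead of reading a live `/dev/disk/by-id` (the function name is illustrative, not from the playbook):

```python
import os
from collections import defaultdict

def group_links_by_device(by_id_entries):
    """Map each kernel device name (e.g. 'sdb') to its by-id symlink names.

    by_id_entries: iterable of (link_name, target) pairs, where target is
    what the symlink points at, e.g. '../../sdb' as under /dev/disk/by-id.
    """
    links = defaultdict(list)
    for name, target in by_id_entries:
        # The last path component of the symlink target is the kernel name.
        links[os.path.basename(target)].append(name)
    return dict(links)

# Synthetic data mirroring the items logged for testbed-node-5.
entries = [
    ("scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999", "../../sdb"),
    ("scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999", "../../sdb"),
    ("ata-QEMU_DVD-ROM_QM00001", "../../sr0"),
]
print(group_links_by_device(entries))
```

This explains why each disk task reports two `ok` items: QEMU exposes both a `scsi-0…` and a `scsi-S…` by-id alias for the same virtual disk.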
2026-04-04 00:42:12.544061 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544071 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.340) 0:00:28.943 ********
2026-04-04 00:42:12.544080 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544089 | orchestrator |
2026-04-04 00:42:12.544098 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544107 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.169) 0:00:29.113 ********
2026-04-04 00:42:12.544116 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544126 | orchestrator |
2026-04-04 00:42:12.544137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544145 | orchestrator | Saturday 04 April 2026 00:42:10 +0000 (0:00:00.188) 0:00:29.302 ********
2026-04-04 00:42:12.544154 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544164 | orchestrator |
2026-04-04 00:42:12.544174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544191 | orchestrator | Saturday 04 April 2026 00:42:10 +0000 (0:00:00.192) 0:00:29.495 ********
2026-04-04 00:42:12.544200 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544208 | orchestrator |
2026-04-04 00:42:12.544216 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544226 | orchestrator | Saturday 04 April 2026 00:42:10 +0000 (0:00:00.189) 0:00:29.685 ********
2026-04-04 00:42:12.544235 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544244 | orchestrator |
2026-04-04 00:42:12.544253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544263 | orchestrator | Saturday 04 April 2026 00:42:10 +0000 (0:00:00.189) 0:00:29.874 ********
2026-04-04 00:42:12.544273 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544282 | orchestrator |
2026-04-04 00:42:12.544291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544301 | orchestrator | Saturday 04 April 2026 00:42:11 +0000 (0:00:00.462) 0:00:30.337 ********
2026-04-04 00:42:12.544310 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544320 | orchestrator |
2026-04-04 00:42:12.544328 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544338 | orchestrator | Saturday 04 April 2026 00:42:11 +0000 (0:00:00.173) 0:00:30.511 ********
2026-04-04 00:42:12.544347 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544356 | orchestrator |
2026-04-04 00:42:12.544366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544376 | orchestrator | Saturday 04 April 2026 00:42:11 +0000 (0:00:00.152) 0:00:30.663 ********
2026-04-04 00:42:12.544385 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-04 00:42:12.544403 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-04 00:42:12.544412 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-04 00:42:12.544422 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-04 00:42:12.544431 | orchestrator |
2026-04-04 00:42:12.544440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544450 | orchestrator | Saturday 04 April 2026 00:42:11 +0000 (0:00:00.453) 0:00:31.116 ********
2026-04-04 00:42:12.544459 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544468 | orchestrator |
2026-04-04 00:42:12.544478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544487 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.168) 0:00:31.285 ********
2026-04-04 00:42:12.544496 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544505 | orchestrator |
2026-04-04 00:42:12.544515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544524 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.175) 0:00:31.461 ********
2026-04-04 00:42:12.544533 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544542 | orchestrator |
2026-04-04 00:42:12.544552 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:42:12.544561 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.163) 0:00:31.624 ********
2026-04-04 00:42:12.544571 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.544580 | orchestrator |
2026-04-04 00:42:12.544598 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-04 00:42:16.465964 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.184) 0:00:31.809 ********
2026-04-04 00:42:16.466151 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-04 00:42:16.466168 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-04 00:42:16.466180 | orchestrator |
2026-04-04 00:42:16.466192 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-04 00:42:16.466203 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.144) 0:00:31.953 ********
2026-04-04 00:42:16.466214 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466226 | orchestrator |
2026-04-04 00:42:16.466237 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-04 00:42:16.466248 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.140) 0:00:32.094 ********
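In `Set UUIDs for OSD VGs/LVs`, each device arrives with `value: None` and leaves with a stable `osd_lvm_uuid`. The logged values are RFC 4122 version 5 UUIDs (the third group of every one starts with `5`), which points at a name-based scheme rather than random generation. The sketch below shows the general pattern; the actual namespace and name inputs used by the playbook are not visible in this log and are assumptions here:

```python
import uuid

def osd_lvm_uuid(hostname, device):
    """Derive a deterministic version-5 UUID for an OSD device.

    NOTE: hypothetical inputs — the real playbook's namespace and name
    string are not shown in the log; only the v5 scheme is inferred.
    """
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

u = osd_lvm_uuid("testbed-node-5", "sdb")
print(u)
```

The practical consequence is idempotence: re-running the play on the same node regenerates the same UUIDs, so VG/LV names do not churn between runs.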
2026-04-04 00:42:16.466259 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466269 | orchestrator |
2026-04-04 00:42:16.466280 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-04 00:42:16.466291 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.105) 0:00:32.200 ********
2026-04-04 00:42:16.466302 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466313 | orchestrator |
2026-04-04 00:42:16.466325 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-04 00:42:16.466336 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.130) 0:00:32.330 ********
2026-04-04 00:42:16.466347 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:42:16.466358 | orchestrator |
2026-04-04 00:42:16.466369 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-04 00:42:16.466380 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.233) 0:00:32.563 ********
2026-04-04 00:42:16.466391 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a8cb98ca-1bad-517a-917a-7c952ebb91ae'}})
2026-04-04 00:42:16.466403 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'}})
2026-04-04 00:42:16.466414 | orchestrator |
2026-04-04 00:42:16.466425 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-04 00:42:16.466436 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.158) 0:00:32.722 ********
2026-04-04 00:42:16.466447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a8cb98ca-1bad-517a-917a-7c952ebb91ae'}})
2026-04-04 00:42:16.466487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'}})
2026-04-04 00:42:16.466502 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466515 | orchestrator |
2026-04-04 00:42:16.466528 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-04 00:42:16.466541 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.147) 0:00:32.869 ********
2026-04-04 00:42:16.466554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a8cb98ca-1bad-517a-917a-7c952ebb91ae'}})
2026-04-04 00:42:16.466567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'}})
2026-04-04 00:42:16.466579 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466592 | orchestrator |
2026-04-04 00:42:16.466605 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-04 00:42:16.466631 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.126) 0:00:32.996 ********
2026-04-04 00:42:16.466646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a8cb98ca-1bad-517a-917a-7c952ebb91ae'}})
2026-04-04 00:42:16.466660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'}})
2026-04-04 00:42:16.466673 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466686 | orchestrator |
2026-04-04 00:42:16.466699 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-04 00:42:16.466712 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.159) 0:00:33.155 ********
2026-04-04 00:42:16.466724 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:42:16.466737 | orchestrator |
2026-04-04 00:42:16.466794 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-04 00:42:16.466807 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.134) 0:00:33.289 ********
2026-04-04 00:42:16.466818 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:42:16.466829 | orchestrator |
2026-04-04 00:42:16.466840 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-04 00:42:16.466851 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.150) 0:00:33.440 ********
2026-04-04 00:42:16.466861 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466872 | orchestrator |
2026-04-04 00:42:16.466883 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-04 00:42:16.466894 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.168) 0:00:33.609 ********
2026-04-04 00:42:16.466920 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466931 | orchestrator |
2026-04-04 00:42:16.466942 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-04 00:42:16.466953 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.140) 0:00:33.749 ********
2026-04-04 00:42:16.466964 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.466974 | orchestrator |
2026-04-04 00:42:16.466985 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-04 00:42:16.466996 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.124) 0:00:33.874 ********
2026-04-04 00:42:16.467007 | orchestrator | ok: [testbed-node-5] => {
2026-04-04 00:42:16.467017 | orchestrator |     "ceph_osd_devices": {
2026-04-04 00:42:16.467028 | orchestrator |         "sdb": {
2026-04-04 00:42:16.467057 | orchestrator |             "osd_lvm_uuid": "a8cb98ca-1bad-517a-917a-7c952ebb91ae"
2026-04-04 00:42:16.467069 | orchestrator |         },
2026-04-04 00:42:16.467080 | orchestrator |         "sdc": {
2026-04-04 00:42:16.467107 | orchestrator |             "osd_lvm_uuid": "0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6"
2026-04-04 00:42:16.467119 | orchestrator |         }
2026-04-04 00:42:16.467130 | orchestrator |     }
2026-04-04 00:42:16.467141 | orchestrator | }
2026-04-04 00:42:16.467152 | orchestrator |
2026-04-04 00:42:16.467172 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-04 00:42:16.467183 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.124) 0:00:33.998 ********
2026-04-04 00:42:16.467194 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.467205 | orchestrator |
2026-04-04 00:42:16.467215 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-04 00:42:16.467226 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.116) 0:00:34.114 ********
2026-04-04 00:42:16.467237 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.467247 | orchestrator |
2026-04-04 00:42:16.467258 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-04 00:42:16.467269 | orchestrator | Saturday 04 April 2026 00:42:15 +0000 (0:00:00.344) 0:00:34.458 ********
2026-04-04 00:42:16.467280 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:16.467290 | orchestrator |
2026-04-04 00:42:16.467301 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-04 00:42:16.467312 | orchestrator | Saturday 04 April 2026 00:42:15 +0000 (0:00:00.138) 0:00:34.597 ********
2026-04-04 00:42:16.467322 | orchestrator | changed: [testbed-node-5] => {
2026-04-04 00:42:16.467333 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-04 00:42:16.467344 | orchestrator |         "ceph_osd_devices": {
2026-04-04 00:42:16.467354 | orchestrator |             "sdb": {
2026-04-04 00:42:16.467365 | orchestrator |                 "osd_lvm_uuid": "a8cb98ca-1bad-517a-917a-7c952ebb91ae"
2026-04-04 00:42:16.467375 | orchestrator |             },
2026-04-04 00:42:16.467386 | orchestrator |             "sdc": {
2026-04-04 00:42:16.467402 | orchestrator |                 "osd_lvm_uuid": "0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6"
2026-04-04 00:42:16.467413 | orchestrator |             }
2026-04-04 00:42:16.467424 | orchestrator |         },
2026-04-04 00:42:16.467435 | orchestrator |         "lvm_volumes": [
2026-04-04 00:42:16.467446 | orchestrator |             {
2026-04-04 00:42:16.467457 | orchestrator |                 "data": "osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae",
2026-04-04 00:42:16.467467 | orchestrator |                 "data_vg": "ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae"
2026-04-04 00:42:16.467478 | orchestrator |             },
2026-04-04 00:42:16.467493 | orchestrator |             {
2026-04-04 00:42:16.467504 | orchestrator |                 "data": "osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6",
2026-04-04 00:42:16.467515 | orchestrator |                 "data_vg": "ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6"
2026-04-04 00:42:16.467526 | orchestrator |             }
2026-04-04 00:42:16.467536 | orchestrator |         ]
2026-04-04 00:42:16.467547 | orchestrator |     }
2026-04-04 00:42:16.467558 | orchestrator | }
2026-04-04 00:42:16.467568 | orchestrator |
2026-04-04 00:42:16.467579 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-04 00:42:16.467590 | orchestrator | Saturday 04 April 2026 00:42:15 +0000 (0:00:00.200) 0:00:34.798 ********
2026-04-04 00:42:16.467601 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-04 00:42:16.467611 | orchestrator |
2026-04-04 00:42:16.467635 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:42:16.467647 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-04 00:42:16.467659 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-04 00:42:16.467670 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-04 00:42:16.467681 | orchestrator |
2026-04-04 00:42:16.467692 | orchestrator |
2026-04-04 00:42:16.467702 | orchestrator |
2026-04-04 00:42:16.467713 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:42:16.467723 | orchestrator | Saturday 04 April 2026 00:42:16 +0000 (0:00:00.921) 0:00:35.719 ********
2026-04-04 00:42:16.467741 | orchestrator | ===============================================================================
2026-04-04 00:42:16.467771 | orchestrator | Write configuration file ------------------------------------------------ 3.80s
2026-04-04 00:42:16.467782 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-04-04 00:42:16.467793 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s
2026-04-04 00:42:16.467804 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2026-04-04 00:42:16.467814 | orchestrator | Get initial list of available block devices ----------------------------- 0.88s
2026-04-04 00:42:16.467825 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2026-04-04 00:42:16.467835 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-04-04 00:42:16.467846 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2026-04-04 00:42:16.467857 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-04-04 00:42:16.467868 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.59s
2026-04-04 00:42:16.467878 | orchestrator | Print configuration data ------------------------------------------------ 0.58s
2026-04-04 00:42:16.467889 | orchestrator | Print DB devices -------------------------------------------------------- 0.56s
2026-04-04 00:42:16.467913 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.53s
2026-04-04 00:42:16.467931 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s
2026-04-04 00:42:16.855607 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-04-04 00:42:16.855719 | orchestrator | Set WAL devices config data --------------------------------------------- 0.50s
2026-04-04 00:42:16.855734 | orchestrator | Add known partitions to the list of available block devices ------------- 0.46s
2026-04-04 00:42:16.855746 | orchestrator | Add known links to the list of available block devices ------------------ 0.46s
2026-04-04 00:42:16.855791 | orchestrator | Add known links to the list of available block devices ------------------ 0.46s
2026-04-04 00:42:16.855802 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.46s
2026-04-04 00:42:38.713977 | orchestrator | 2026-04-04 00:42:38 | INFO  | Task 2b4ba0b0-c4f1-4e0c-bd08-db2d5b946705 (sync inventory) is running in background. Output coming soon.
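The PLAY RECAP above is the quickest health check for this phase of the job: all three storage nodes report `failed=0` and `unreachable=0` with identical task counts. A small sketch of extracting those counters from a recap line, as one might in a log-scraping check (the regex and function name are illustrative, not part of the job):

```python
import re

# Matches the leading fields of an Ansible PLAY RECAP host line.
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)"
    r"\s+unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line):
    """Return a dict of host name and integer task counters for one recap line."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    return {k: int(v) if v.isdigit() else v for k, v in m.groupdict().items()}

# One of the recap lines from the log above.
line = "testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0"
stats = parse_recap(line)
print(stats)
```

A periodic job like this one can gate on `failed == 0 and unreachable == 0` per host before proceeding to the next play.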
2026-04-04 00:43:08.069936 | orchestrator | 2026-04-04 00:42:40 | INFO  | Starting group_vars file reorganization
2026-04-04 00:43:08.070049 | orchestrator | 2026-04-04 00:42:40 | INFO  | Moved 0 file(s) to their respective directories
2026-04-04 00:43:08.070059 | orchestrator | 2026-04-04 00:42:40 | INFO  | Group_vars file reorganization completed
2026-04-04 00:43:08.070064 | orchestrator | 2026-04-04 00:42:42 | INFO  | Starting variable preparation from inventory
2026-04-04 00:43:08.070069 | orchestrator | 2026-04-04 00:42:45 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-04 00:43:08.070074 | orchestrator | 2026-04-04 00:42:45 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-04 00:43:08.070079 | orchestrator | 2026-04-04 00:42:45 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-04 00:43:08.070083 | orchestrator | 2026-04-04 00:42:45 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-04 00:43:08.070087 | orchestrator | 2026-04-04 00:42:45 | INFO  | Variable preparation completed
2026-04-04 00:43:08.070091 | orchestrator | 2026-04-04 00:42:46 | INFO  | Starting inventory overwrite handling
2026-04-04 00:43:08.070095 | orchestrator | 2026-04-04 00:42:46 | INFO  | Handling group overwrites in 99-overwrite
2026-04-04 00:43:08.070099 | orchestrator | 2026-04-04 00:42:46 | INFO  | Removing group frr:children from 60-generic
2026-04-04 00:43:08.070122 | orchestrator | 2026-04-04 00:42:46 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-04 00:43:08.070127 | orchestrator | 2026-04-04 00:42:46 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-04 00:43:08.070131 | orchestrator | 2026-04-04 00:42:46 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-04 00:43:08.070135 | orchestrator | 2026-04-04 00:42:46 | INFO  | Handling group overwrites in 20-roles
2026-04-04 00:43:08.070139 | orchestrator | 2026-04-04 00:42:46 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-04 00:43:08.070143 | orchestrator | 2026-04-04 00:42:46 | INFO  | Removed 5 group(s) in total
2026-04-04 00:43:08.070147 | orchestrator | 2026-04-04 00:42:46 | INFO  | Inventory overwrite handling completed
2026-04-04 00:43:08.070151 | orchestrator | 2026-04-04 00:42:48 | INFO  | Starting merge of inventory files
2026-04-04 00:43:08.070155 | orchestrator | 2026-04-04 00:42:48 | INFO  | Inventory files merged successfully
2026-04-04 00:43:08.070158 | orchestrator | 2026-04-04 00:42:53 | INFO  | Generating minified hosts file
2026-04-04 00:43:08.070163 | orchestrator | 2026-04-04 00:42:54 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-04 00:43:08.070167 | orchestrator | 2026-04-04 00:42:54 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-04 00:43:08.070182 | orchestrator | 2026-04-04 00:42:56 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-04 00:43:08.070186 | orchestrator | 2026-04-04 00:43:06 | INFO  | Successfully wrote ClusterShell configuration
2026-04-04 00:43:08.070191 | orchestrator | [master a736b0d] 2026-04-04-00-43
2026-04-04 00:43:08.070196 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-04 00:43:08.070201 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-04 00:43:08.070205 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-04 00:43:08.070209 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-04 00:43:09.262639 | orchestrator | 2026-04-04 00:43:09 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-04 00:43:09.316276 | orchestrator | 2026-04-04 00:43:09 | INFO  | Task 9fcac172-8bd7-484f-8459-0e44580eb46c (ceph-create-lvm-devices) was prepared for execution.
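The sync-inventory task above reorganizes group_vars, removes overwritten groups, and then merges the layered inventory files (20-roles, 50-*, 60-generic, 99-overwrite) into a single minified hosts file. A minimal sketch of such a layered merge, assuming later layers simply win on conflicts; the `deep_merge` helper is hypothetical, not the actual osism implementation:

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay wins on scalar conflicts."""
    merged = dict(base)
    for key, value in overlay.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Two hypothetical inventory layers contributing hosts to the same group.
layers = [
    {"all": {"children": {"ceph": {"hosts": {"testbed-node-3": {}}}}}},
    {"all": {"children": {"ceph": {"hosts": {"testbed-node-4": {}}}}}},
]
merged = {}
for layer in layers:
    merged = deep_merge(merged, layer)
# merged now contains both testbed-node-3 and testbed-node-4 under "ceph".
```
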
2026-04-04 00:43:09.316364 | orchestrator | 2026-04-04 00:43:09 | INFO  | It takes a moment until task 9fcac172-8bd7-484f-8459-0e44580eb46c (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-04 00:43:19.230117 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-04 00:43:19.230231 | orchestrator | 2.16.14
2026-04-04 00:43:19.230249 | orchestrator |
2026-04-04 00:43:19.230268 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-04 00:43:19.230289 | orchestrator |
2026-04-04 00:43:19.230307 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-04 00:43:19.230330 | orchestrator | Saturday 04 April 2026 00:43:12 +0000 (0:00:00.211) 0:00:00.211 ********
2026-04-04 00:43:19.230349 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 00:43:19.230370 | orchestrator |
2026-04-04 00:43:19.230389 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-04 00:43:19.230408 | orchestrator | Saturday 04 April 2026 00:43:13 +0000 (0:00:00.260) 0:00:00.472 ********
2026-04-04 00:43:19.230428 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:43:19.230441 | orchestrator |
2026-04-04 00:43:19.230452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.230463 | orchestrator | Saturday 04 April 2026 00:43:13 +0000 (0:00:00.199) 0:00:00.672 ********
2026-04-04 00:43:19.230501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-04 00:43:19.230512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-04 00:43:19.230523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-04 00:43:19.230534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-04 00:43:19.230545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-04 00:43:19.230571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-04 00:43:19.230583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-04 00:43:19.230596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-04 00:43:19.230609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-04 00:43:19.230622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-04 00:43:19.230636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-04 00:43:19.230649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-04 00:43:19.230662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-04 00:43:19.230674 | orchestrator |
2026-04-04 00:43:19.230688 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.230701 | orchestrator | Saturday 04 April 2026 00:43:13 +0000 (0:00:00.384) 0:00:01.057 ********
2026-04-04 00:43:19.230713 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.230726 | orchestrator |
2026-04-04 00:43:19.230803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.230819 | orchestrator | Saturday 04 April 2026 00:43:14 +0000 (0:00:00.494) 0:00:01.551 ********
2026-04-04 00:43:19.230832 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.230845 | orchestrator |
2026-04-04 00:43:19.230859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.230870 | orchestrator | Saturday 04 April 2026 00:43:14 +0000 (0:00:00.183) 0:00:01.735 ********
2026-04-04 00:43:19.230881 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.230891 | orchestrator |
2026-04-04 00:43:19.230902 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.230913 | orchestrator | Saturday 04 April 2026 00:43:14 +0000 (0:00:00.165) 0:00:01.901 ********
2026-04-04 00:43:19.230924 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.230935 | orchestrator |
2026-04-04 00:43:19.230946 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.230957 | orchestrator | Saturday 04 April 2026 00:43:14 +0000 (0:00:00.160) 0:00:02.061 ********
2026-04-04 00:43:19.230968 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.230979 | orchestrator |
2026-04-04 00:43:19.230989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231000 | orchestrator | Saturday 04 April 2026 00:43:14 +0000 (0:00:00.163) 0:00:02.225 ********
2026-04-04 00:43:19.231011 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231022 | orchestrator |
2026-04-04 00:43:19.231033 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231044 | orchestrator | Saturday 04 April 2026 00:43:15 +0000 (0:00:00.159) 0:00:02.385 ********
2026-04-04 00:43:19.231055 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231066 | orchestrator |
2026-04-04 00:43:19.231077 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231088 | orchestrator | Saturday 04 April 2026 00:43:15 +0000 (0:00:00.153) 0:00:02.538 ********
2026-04-04 00:43:19.231099 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231118 | orchestrator |
2026-04-04 00:43:19.231130 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231141 | orchestrator | Saturday 04 April 2026 00:43:15 +0000 (0:00:00.185) 0:00:02.724 ********
2026-04-04 00:43:19.231152 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79)
2026-04-04 00:43:19.231164 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79)
2026-04-04 00:43:19.231175 | orchestrator |
2026-04-04 00:43:19.231186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231217 | orchestrator | Saturday 04 April 2026 00:43:15 +0000 (0:00:00.370) 0:00:03.094 ********
2026-04-04 00:43:19.231229 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c)
2026-04-04 00:43:19.231240 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c)
2026-04-04 00:43:19.231251 | orchestrator |
2026-04-04 00:43:19.231262 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231273 | orchestrator | Saturday 04 April 2026 00:43:16 +0000 (0:00:00.368) 0:00:03.463 ********
2026-04-04 00:43:19.231284 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10)
2026-04-04 00:43:19.231294 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10)
2026-04-04 00:43:19.231305 | orchestrator |
2026-04-04 00:43:19.231329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231340 | orchestrator | Saturday 04 April 2026 00:43:16 +0000 (0:00:00.505) 0:00:03.968 ********
2026-04-04 00:43:19.231351 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903)
2026-04-04 00:43:19.231362 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903)
2026-04-04 00:43:19.231372 | orchestrator |
2026-04-04 00:43:19.231383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:19.231394 | orchestrator | Saturday 04 April 2026 00:43:17 +0000 (0:00:00.523) 0:00:04.492 ********
2026-04-04 00:43:19.231405 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-04 00:43:19.231416 | orchestrator |
2026-04-04 00:43:19.231427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231438 | orchestrator | Saturday 04 April 2026 00:43:17 +0000 (0:00:00.509) 0:00:05.001 ********
2026-04-04 00:43:19.231449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-04 00:43:19.231460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-04 00:43:19.231471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-04 00:43:19.231482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-04 00:43:19.231493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-04 00:43:19.231504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-04 00:43:19.231515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-04 00:43:19.231525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-04 00:43:19.231536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-04 00:43:19.231547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-04 00:43:19.231558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-04 00:43:19.231569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-04 00:43:19.231587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-04 00:43:19.231598 | orchestrator |
2026-04-04 00:43:19.231609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231620 | orchestrator | Saturday 04 April 2026 00:43:17 +0000 (0:00:00.351) 0:00:05.353 ********
2026-04-04 00:43:19.231630 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231641 | orchestrator |
2026-04-04 00:43:19.231652 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231663 | orchestrator | Saturday 04 April 2026 00:43:18 +0000 (0:00:00.175) 0:00:05.528 ********
2026-04-04 00:43:19.231674 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231684 | orchestrator |
2026-04-04 00:43:19.231704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231715 | orchestrator | Saturday 04 April 2026 00:43:18 +0000 (0:00:00.151) 0:00:05.680 ********
2026-04-04 00:43:19.231726 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231737 | orchestrator |
2026-04-04 00:43:19.231775 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231787 | orchestrator | Saturday 04 April 2026 00:43:18 +0000 (0:00:00.178) 0:00:05.858 ********
2026-04-04 00:43:19.231798 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231809 | orchestrator |
2026-04-04 00:43:19.231820 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231831 | orchestrator | Saturday 04 April 2026 00:43:18 +0000 (0:00:00.203) 0:00:06.061 ********
2026-04-04 00:43:19.231841 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231852 | orchestrator |
2026-04-04 00:43:19.231863 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231874 | orchestrator | Saturday 04 April 2026 00:43:18 +0000 (0:00:00.162) 0:00:06.224 ********
2026-04-04 00:43:19.231885 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231896 | orchestrator |
2026-04-04 00:43:19.231907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:19.231918 | orchestrator | Saturday 04 April 2026 00:43:19 +0000 (0:00:00.193) 0:00:06.417 ********
2026-04-04 00:43:19.231929 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:19.231940 | orchestrator |
2026-04-04 00:43:19.231957 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:26.921613 | orchestrator | Saturday 04 April 2026 00:43:19 +0000 (0:00:00.168) 0:00:06.585 ********
2026-04-04 00:43:26.921736 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.921815 | orchestrator |
2026-04-04 00:43:26.921828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:26.921840 | orchestrator | Saturday 04 April 2026 00:43:19 +0000 (0:00:00.166) 0:00:06.752 ********
2026-04-04 00:43:26.921851 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-04 00:43:26.921862 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-04 00:43:26.921873 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-04 00:43:26.921884 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-04 00:43:26.921895 | orchestrator |
2026-04-04 00:43:26.921907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:26.921918 | orchestrator | Saturday 04 April 2026 00:43:20 +0000 (0:00:00.821) 0:00:07.573 ********
2026-04-04 00:43:26.921929 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.921939 | orchestrator |
2026-04-04 00:43:26.921950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:26.921961 | orchestrator | Saturday 04 April 2026 00:43:20 +0000 (0:00:00.188) 0:00:07.761 ********
2026-04-04 00:43:26.921972 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.921983 | orchestrator |
2026-04-04 00:43:26.921993 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:26.922087 | orchestrator | Saturday 04 April 2026 00:43:20 +0000 (0:00:00.180) 0:00:07.942 ********
2026-04-04 00:43:26.922102 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922117 | orchestrator |
2026-04-04 00:43:26.922137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:26.922156 | orchestrator | Saturday 04 April 2026 00:43:20 +0000 (0:00:00.191) 0:00:08.133 ********
2026-04-04 00:43:26.922174 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922202 | orchestrator |
2026-04-04 00:43:26.922241 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-04 00:43:26.922263 | orchestrator | Saturday 04 April 2026 00:43:20 +0000 (0:00:00.184) 0:00:08.317 ********
2026-04-04 00:43:26.922282 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922299 | orchestrator |
2026-04-04
00:43:26.922314 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-04 00:43:26.922326 | orchestrator | Saturday 04 April 2026 00:43:21 +0000 (0:00:00.114) 0:00:08.432 ********
2026-04-04 00:43:26.922339 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'}})
2026-04-04 00:43:26.922355 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'}})
2026-04-04 00:43:26.922374 | orchestrator |
2026-04-04 00:43:26.922392 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-04 00:43:26.922410 | orchestrator | Saturday 04 April 2026 00:43:21 +0000 (0:00:00.213) 0:00:08.645 ********
2026-04-04 00:43:26.922429 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.922450 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.922469 | orchestrator |
2026-04-04 00:43:26.922488 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-04 00:43:26.922505 | orchestrator | Saturday 04 April 2026 00:43:23 +0000 (0:00:01.965) 0:00:10.611 ********
2026-04-04 00:43:26.922517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.922529 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.922540 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922550 | orchestrator |
2026-04-04 00:43:26.922561 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-04 00:43:26.922572 | orchestrator | Saturday 04 April 2026 00:43:23 +0000 (0:00:00.150) 0:00:10.762 ********
2026-04-04 00:43:26.922583 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.922594 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.922605 | orchestrator |
2026-04-04 00:43:26.922616 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-04 00:43:26.922627 | orchestrator | Saturday 04 April 2026 00:43:24 +0000 (0:00:01.541) 0:00:12.304 ********
2026-04-04 00:43:26.922637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.922648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.922659 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922670 | orchestrator |
2026-04-04 00:43:26.922681 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-04 00:43:26.922703 | orchestrator | Saturday 04 April 2026 00:43:25 +0000 (0:00:00.147) 0:00:12.463 ********
2026-04-04 00:43:26.922770 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922790 | orchestrator |
2026-04-04 00:43:26.922806 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-04 00:43:26.922823 | orchestrator | Saturday 04 April 2026 00:43:25 +0000 (0:00:00.147) 0:00:12.611 ********
2026-04-04 00:43:26.922841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.922857 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.922876 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922893 | orchestrator |
2026-04-04 00:43:26.922911 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-04 00:43:26.922929 | orchestrator | Saturday 04 April 2026 00:43:25 +0000 (0:00:00.329) 0:00:12.940 ********
2026-04-04 00:43:26.922947 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.922963 | orchestrator |
2026-04-04 00:43:26.922979 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-04 00:43:26.922996 | orchestrator | Saturday 04 April 2026 00:43:25 +0000 (0:00:00.142) 0:00:13.082 ********
2026-04-04 00:43:26.923012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.923029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.923046 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.923065 | orchestrator |
2026-04-04 00:43:26.923084 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-04 00:43:26.923103 | orchestrator | Saturday 04 April 2026 00:43:25 +0000 (0:00:00.155) 0:00:13.238 ********
2026-04-04 00:43:26.923121 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.923139 | orchestrator |
2026-04-04 00:43:26.923158 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-04 00:43:26.923178 | orchestrator | Saturday 04 April 2026 00:43:26 +0000 (0:00:00.137) 0:00:13.375 ********
2026-04-04 00:43:26.923196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.923215 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.923234 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.923252 | orchestrator |
2026-04-04 00:43:26.923270 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-04 00:43:26.923288 | orchestrator | Saturday 04 April 2026 00:43:26 +0000 (0:00:00.152) 0:00:13.528 ********
2026-04-04 00:43:26.923308 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:43:26.923325 | orchestrator |
2026-04-04 00:43:26.923344 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-04 00:43:26.923362 | orchestrator | Saturday 04 April 2026 00:43:26 +0000 (0:00:00.134) 0:00:13.663 ********
2026-04-04 00:43:26.923381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.923400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.923418 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.923438 | orchestrator |
2026-04-04 00:43:26.923456 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-04 00:43:26.923489 | orchestrator | Saturday 04 April 2026 00:43:26 +0000 (0:00:00.147) 0:00:13.810 ********
2026-04-04 00:43:26.923507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.923526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.923544 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.923562 | orchestrator |
2026-04-04 00:43:26.923582 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-04 00:43:26.923601 | orchestrator | Saturday 04 April 2026 00:43:26 +0000 (0:00:00.150) 0:00:13.961 ********
2026-04-04 00:43:26.923620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:26.923638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:26.923658 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.923677 | orchestrator |
2026-04-04 00:43:26.923695 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-04 00:43:26.923715 | orchestrator | Saturday 04 April 2026 00:43:26 +0000 (0:00:00.170) 0:00:14.131 ********
2026-04-04 00:43:26.923769 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:26.923792 | orchestrator |
2026-04-04 00:43:26.923812 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-04 00:43:26.923847 | orchestrator | Saturday 04 April 2026 00:43:26 +0000 (0:00:00.146) 0:00:14.277 ********
2026-04-04 00:43:33.244317 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:43:33.244428 | orchestrator | 2026-04-04 00:43:33.244445 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-04 00:43:33.244458 | orchestrator | Saturday 04 April 2026 00:43:27 +0000 (0:00:00.141) 0:00:14.419 ******** 2026-04-04 00:43:33.244472 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:43:33.244491 | orchestrator | 2026-04-04 00:43:33.244510 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-04 00:43:33.244528 | orchestrator | Saturday 04 April 2026 00:43:27 +0000 (0:00:00.131) 0:00:14.550 ******** 2026-04-04 00:43:33.244546 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:43:33.244564 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-04 00:43:33.244583 | orchestrator | } 2026-04-04 00:43:33.244602 | orchestrator | 2026-04-04 00:43:33.244620 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-04 00:43:33.244638 | orchestrator | Saturday 04 April 2026 00:43:27 +0000 (0:00:00.348) 0:00:14.899 ******** 2026-04-04 00:43:33.244649 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:43:33.244661 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-04 00:43:33.244672 | orchestrator | } 2026-04-04 00:43:33.244683 | orchestrator | 2026-04-04 00:43:33.244694 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-04 00:43:33.244705 | orchestrator | Saturday 04 April 2026 00:43:27 +0000 (0:00:00.142) 0:00:15.041 ******** 2026-04-04 00:43:33.244716 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:43:33.244727 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-04 00:43:33.244769 | orchestrator | } 2026-04-04 00:43:33.244786 | orchestrator | 2026-04-04 00:43:33.244804 | orchestrator | TASK [Gather DB VGs with total and 
available size in bytes] ******************** 2026-04-04 00:43:33.244832 | orchestrator | Saturday 04 April 2026 00:43:27 +0000 (0:00:00.148) 0:00:15.189 ******** 2026-04-04 00:43:33.244854 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:43:33.244872 | orchestrator | 2026-04-04 00:43:33.244912 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-04 00:43:33.244931 | orchestrator | Saturday 04 April 2026 00:43:28 +0000 (0:00:00.701) 0:00:15.891 ******** 2026-04-04 00:43:33.244980 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:43:33.245002 | orchestrator | 2026-04-04 00:43:33.245022 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-04 00:43:33.245042 | orchestrator | Saturday 04 April 2026 00:43:29 +0000 (0:00:00.493) 0:00:16.384 ******** 2026-04-04 00:43:33.245060 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:43:33.245078 | orchestrator | 2026-04-04 00:43:33.245097 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-04 00:43:33.245116 | orchestrator | Saturday 04 April 2026 00:43:29 +0000 (0:00:00.584) 0:00:16.969 ******** 2026-04-04 00:43:33.245135 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:43:33.245149 | orchestrator | 2026-04-04 00:43:33.245167 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-04 00:43:33.245186 | orchestrator | Saturday 04 April 2026 00:43:29 +0000 (0:00:00.168) 0:00:17.137 ******** 2026-04-04 00:43:33.245204 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:43:33.245223 | orchestrator | 2026-04-04 00:43:33.245241 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-04 00:43:33.245260 | orchestrator | Saturday 04 April 2026 00:43:29 +0000 (0:00:00.110) 0:00:17.248 ******** 2026-04-04 00:43:33.245278 | orchestrator | skipping: [testbed-node-3] 
2026-04-04 00:43:33.245295 | orchestrator |
2026-04-04 00:43:33.245314 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-04 00:43:33.245332 | orchestrator | Saturday 04 April 2026 00:43:30 +0000 (0:00:00.134) 0:00:17.382 ********
2026-04-04 00:43:33.245351 | orchestrator | ok: [testbed-node-3] => {
2026-04-04 00:43:33.245370 | orchestrator |     "vgs_report": {
2026-04-04 00:43:33.245389 | orchestrator |         "vg": []
2026-04-04 00:43:33.245408 | orchestrator |     }
2026-04-04 00:43:33.245427 | orchestrator | }
2026-04-04 00:43:33.245445 | orchestrator |
2026-04-04 00:43:33.245464 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-04 00:43:33.245484 | orchestrator | Saturday 04 April 2026 00:43:30 +0000 (0:00:00.162) 0:00:17.544 ********
2026-04-04 00:43:33.245502 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.245520 | orchestrator |
2026-04-04 00:43:33.245539 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-04 00:43:33.245558 | orchestrator | Saturday 04 April 2026 00:43:30 +0000 (0:00:00.144) 0:00:17.689 ********
2026-04-04 00:43:33.245577 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.245595 | orchestrator |
2026-04-04 00:43:33.245613 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-04 00:43:33.245631 | orchestrator | Saturday 04 April 2026 00:43:30 +0000 (0:00:00.132) 0:00:17.822 ********
2026-04-04 00:43:33.245650 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.245668 | orchestrator |
2026-04-04 00:43:33.245687 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-04 00:43:33.245705 | orchestrator | Saturday 04 April 2026 00:43:30 +0000 (0:00:00.131) 0:00:17.953 ********
2026-04-04 00:43:33.245723 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.245784 | orchestrator |
2026-04-04 00:43:33.245804 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-04 00:43:33.245823 | orchestrator | Saturday 04 April 2026 00:43:30 +0000 (0:00:00.295) 0:00:18.249 ********
2026-04-04 00:43:33.245843 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.245861 | orchestrator |
2026-04-04 00:43:33.245880 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-04 00:43:33.245898 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.116) 0:00:18.366 ********
2026-04-04 00:43:33.245918 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.245938 | orchestrator |
2026-04-04 00:43:33.245957 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-04 00:43:33.245976 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.129) 0:00:18.495 ********
2026-04-04 00:43:33.245996 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246103 | orchestrator |
2026-04-04 00:43:33.246119 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-04 00:43:33.246131 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.132) 0:00:18.627 ********
2026-04-04 00:43:33.246163 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246175 | orchestrator |
2026-04-04 00:43:33.246186 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-04 00:43:33.246197 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.129) 0:00:18.757 ********
2026-04-04 00:43:33.246208 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246218 | orchestrator |
2026-04-04 00:43:33.246229 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-04 00:43:33.246240 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.134) 0:00:18.892 ********
2026-04-04 00:43:33.246251 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246261 | orchestrator |
2026-04-04 00:43:33.246272 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-04 00:43:33.246283 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.139) 0:00:19.031 ********
2026-04-04 00:43:33.246294 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246304 | orchestrator |
2026-04-04 00:43:33.246315 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-04 00:43:33.246326 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.140) 0:00:19.172 ********
2026-04-04 00:43:33.246337 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246347 | orchestrator |
2026-04-04 00:43:33.246358 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-04 00:43:33.246369 | orchestrator | Saturday 04 April 2026 00:43:31 +0000 (0:00:00.133) 0:00:19.306 ********
2026-04-04 00:43:33.246379 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246391 | orchestrator |
2026-04-04 00:43:33.246408 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-04 00:43:33.246425 | orchestrator | Saturday 04 April 2026 00:43:32 +0000 (0:00:00.134) 0:00:19.440 ********
2026-04-04 00:43:33.246436 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246447 | orchestrator |
2026-04-04 00:43:33.246467 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-04 00:43:33.246478 | orchestrator | Saturday 04 April 2026 00:43:32 +0000 (0:00:00.136) 0:00:19.577 ********
2026-04-04 00:43:33.246491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:33.246503 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:33.246610 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246622 | orchestrator |
2026-04-04 00:43:33.246633 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-04 00:43:33.246644 | orchestrator | Saturday 04 April 2026 00:43:32 +0000 (0:00:00.160) 0:00:19.738 ********
2026-04-04 00:43:33.246656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:33.246667 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:33.246678 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246688 | orchestrator |
2026-04-04 00:43:33.246699 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-04 00:43:33.246710 | orchestrator | Saturday 04 April 2026 00:43:32 +0000 (0:00:00.338) 0:00:20.076 ********
2026-04-04 00:43:33.246721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:33.246732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:33.246836 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246856 | orchestrator |
2026-04-04 00:43:33.246868 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-04 00:43:33.246879 | orchestrator | Saturday 04 April 2026 00:43:32 +0000 (0:00:00.159) 0:00:20.236 ********
2026-04-04 00:43:33.246890 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:33.246901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:33.246912 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246923 | orchestrator |
2026-04-04 00:43:33.246934 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-04 00:43:33.246945 | orchestrator | Saturday 04 April 2026 00:43:33 +0000 (0:00:00.144) 0:00:20.380 ********
2026-04-04 00:43:33.246956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:33.246967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:33.246978 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:33.246988 | orchestrator |
2026-04-04 00:43:33.246999 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-04 00:43:33.247010 | orchestrator | Saturday 04 April 2026 00:43:33 +0000 (0:00:00.153) 0:00:20.534 ********
2026-04-04 00:43:33.247034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:38.789528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:38.789638 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:38.789655 | orchestrator |
2026-04-04 00:43:38.789667 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-04 00:43:38.789680 | orchestrator | Saturday 04 April 2026 00:43:33 +0000 (0:00:00.181) 0:00:20.716 ********
2026-04-04 00:43:38.789691 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:38.789703 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:38.789714 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:38.789725 | orchestrator |
2026-04-04 00:43:38.789767 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-04 00:43:38.789787 | orchestrator | Saturday 04 April 2026 00:43:33 +0000 (0:00:00.164) 0:00:20.880 ********
2026-04-04 00:43:38.789807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:38.789825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:38.789845 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:38.789857 | orchestrator |
2026-04-04 00:43:38.789869 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-04 00:43:38.789882 | orchestrator | Saturday 04 April 2026 00:43:33 +0000 (0:00:00.139) 0:00:21.019 ********
2026-04-04 00:43:38.789900 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:43:38.789920 | orchestrator |
2026-04-04 00:43:38.789970 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-04 00:43:38.789990 | orchestrator | Saturday 04 April 2026 00:43:34 +0000 (0:00:00.509) 0:00:21.528 ********
2026-04-04 00:43:38.790007 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:43:38.790079 | orchestrator |
2026-04-04 00:43:38.790093 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-04 00:43:38.790125 | orchestrator | Saturday 04 April 2026 00:43:34 +0000 (0:00:00.524) 0:00:22.052 ********
2026-04-04 00:43:38.790138 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:43:38.790151 | orchestrator |
2026-04-04 00:43:38.790163 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-04 00:43:38.790177 | orchestrator | Saturday 04 April 2026 00:43:34 +0000 (0:00:00.151) 0:00:22.204 ********
2026-04-04 00:43:38.790189 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'vg_name': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:38.790204 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'vg_name': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:38.790216 | orchestrator |
2026-04-04 00:43:38.790230 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-04 00:43:38.790243 | orchestrator | Saturday 04 April 2026 00:43:35 +0000 (0:00:00.224) 0:00:22.428 ********
2026-04-04 00:43:38.790256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:38.790269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:38.790281 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:38.790294 | orchestrator |
2026-04-04 00:43:38.790306 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-04 00:43:38.790319 | orchestrator | Saturday 04 April 2026 00:43:35 +0000 (0:00:00.197) 0:00:22.625 ********
2026-04-04 00:43:38.790331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:38.790345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:38.790358 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:38.790370 | orchestrator |
2026-04-04 00:43:38.790388 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-04 00:43:38.790408 | orchestrator | Saturday 04 April 2026 00:43:35 +0000 (0:00:00.373) 0:00:22.999 ********
2026-04-04 00:43:38.790427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:43:38.790448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:43:38.790468 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:43:38.790487 | orchestrator |
2026-04-04 00:43:38.790502 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-04 00:43:38.790513 | orchestrator | Saturday 04 April 2026 00:43:35 +0000 (0:00:00.194) 0:00:23.193 ********
2026-04-04 00:43:38.790543 | orchestrator | ok: [testbed-node-3] => {
2026-04-04 00:43:38.790554 | orchestrator |     "lvm_report": {
2026-04-04 00:43:38.790565 | orchestrator |         "lv": [
2026-04-04 00:43:38.790576 | orchestrator |             {
2026-04-04 00:43:38.790587 | orchestrator |                 "lv_name": "osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3",
2026-04-04 00:43:38.790598 | orchestrator |                 "vg_name": "ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3"
2026-04-04 00:43:38.790609 | orchestrator |             },
2026-04-04 00:43:38.790630 | orchestrator |             {
2026-04-04 00:43:38.790641 | orchestrator |                 "lv_name": "osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf",
2026-04-04 00:43:38.790651 | orchestrator |                 "vg_name": "ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf"
2026-04-04 00:43:38.790662 | orchestrator |             }
2026-04-04 00:43:38.790673 | orchestrator |         ],
2026-04-04 00:43:38.790683 | orchestrator |         "pv": [
2026-04-04 00:43:38.790694 | orchestrator |             {
2026-04-04 00:43:38.790704 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-04 00:43:38.790715 | orchestrator |                 "vg_name": "ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3"
2026-04-04 00:43:38.790726 | orchestrator |             },
2026-04-04 00:43:38.790818 | orchestrator |             {
2026-04-04 00:43:38.790832 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-04 00:43:38.790843 | orchestrator |                 "vg_name": "ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf"
2026-04-04 00:43:38.790854 | orchestrator |             }
2026-04-04 00:43:38.790864 | orchestrator |         ]
2026-04-04 00:43:38.790875 | orchestrator |     }
2026-04-04 00:43:38.790886 | orchestrator | }
2026-04-04 00:43:38.790897 | orchestrator |
2026-04-04 00:43:38.790908 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-04 00:43:38.790922 | orchestrator |
2026-04-04 00:43:38.790940 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-04 00:43:38.790967 | orchestrator | Saturday 04 April 2026 00:43:36 +0000 (0:00:00.285) 0:00:23.479 ********
2026-04-04 00:43:38.790987 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-04 00:43:38.791006 | orchestrator |
2026-04-04 00:43:38.791024 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-04 00:43:38.791045 | orchestrator | Saturday 04 April 2026 00:43:36 +0000 (0:00:00.262) 0:00:23.741 ********
2026-04-04 00:43:38.791064 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:43:38.791082 | orchestrator |
2026-04-04 00:43:38.791101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:38.791114 | orchestrator | Saturday 04 April 2026 00:43:36 +0000 (0:00:00.256) 0:00:23.998 ********
2026-04-04 00:43:38.791124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-04 00:43:38.791135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-04 00:43:38.791146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-04 00:43:38.791157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-04 00:43:38.791167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-04 00:43:38.791178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-04 00:43:38.791188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-04 00:43:38.791199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-04 00:43:38.791209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-04 00:43:38.791220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-04 00:43:38.791231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-04 00:43:38.791247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-04 00:43:38.791264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-04 00:43:38.791280 | orchestrator |
2026-04-04 00:43:38.791297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:38.791314 | orchestrator | Saturday 04 April 2026 00:43:37 +0000 (0:00:00.470) 0:00:24.468 ********
2026-04-04 00:43:38.791332 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:38.791362 | orchestrator |
2026-04-04 00:43:38.791382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:38.791401 | orchestrator | Saturday 04 April 2026 00:43:37 +0000 (0:00:00.194) 0:00:24.663 ********
2026-04-04 00:43:38.791419 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:38.791438 | orchestrator |
2026-04-04 00:43:38.791457 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:38.791474 | orchestrator | Saturday 04 April 2026 00:43:37 +0000 (0:00:00.190) 0:00:24.853 ********
2026-04-04 00:43:38.791493 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:38.791511 | orchestrator |
2026-04-04 00:43:38.791529 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:38.791547 | orchestrator | Saturday 04 April 2026 00:43:37 +0000 (0:00:00.178) 0:00:25.032 ********
2026-04-04 00:43:38.791565 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:38.791584 | orchestrator |
2026-04-04 00:43:38.791602 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:38.791622 | orchestrator | Saturday 04 April 2026 00:43:38 +0000 (0:00:00.666) 0:00:25.698 ********
2026-04-04 00:43:38.791641 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:38.791659 | orchestrator |
2026-04-04 00:43:38.791678 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:38.791697 | orchestrator | Saturday 04 April 2026 00:43:38 +0000 (0:00:00.228) 0:00:25.926 ********
2026-04-04 00:43:38.791715 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:38.791730 | orchestrator |
2026-04-04 00:43:38.791786 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:48.960716 | orchestrator | Saturday 04 April 2026 00:43:38 +0000 (0:00:00.219) 0:00:26.145 ********
2026-04-04 00:43:48.960822 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.960835 | orchestrator |
2026-04-04 00:43:48.960842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:48.960847 | orchestrator | Saturday 04 April 2026 00:43:39 +0000 (0:00:00.217) 0:00:26.363 ********
2026-04-04 00:43:48.960851 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.960856 | orchestrator |
2026-04-04 00:43:48.960861 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:48.960865 | orchestrator | Saturday 04 April 2026 00:43:39 +0000 (0:00:00.185) 0:00:26.548 ********
2026-04-04 00:43:48.960870 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d)
2026-04-04 00:43:48.960876 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d)
2026-04-04 00:43:48.960880 | orchestrator |
2026-04-04 00:43:48.960885 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:48.960889 | orchestrator | Saturday 04 April 2026 00:43:39 +0000 (0:00:00.435) 0:00:26.983 ********
2026-04-04 00:43:48.960894 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a)
2026-04-04 00:43:48.960898 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a)
2026-04-04 00:43:48.960903 | orchestrator |
2026-04-04 00:43:48.960907 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:48.960912 | orchestrator | Saturday 04 April 2026 00:43:40 +0000 (0:00:00.470) 0:00:27.453 ********
2026-04-04 00:43:48.960916 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca)
2026-04-04 00:43:48.960921 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca)
2026-04-04 00:43:48.960925 | orchestrator |
2026-04-04 00:43:48.960930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:48.960934 | orchestrator | Saturday 04 April 2026 00:43:40 +0000 (0:00:00.408) 0:00:27.862 ********
2026-04-04 00:43:48.960939 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338)
2026-04-04 00:43:48.960961 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338)
2026-04-04 00:43:48.960965 | orchestrator |
2026-04-04 00:43:48.960970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:43:48.960974 | orchestrator | Saturday 04 April 2026 00:43:41 +0000 (0:00:00.595) 0:00:28.458 ********
2026-04-04 00:43:48.960978 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-04 00:43:48.960983 | orchestrator |
2026-04-04 00:43:48.960987 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.960991 | orchestrator | Saturday 04 April 2026 00:43:41 +0000 (0:00:00.392) 0:00:28.851 ********
2026-04-04 00:43:48.960995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-04 00:43:48.961000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-04 00:43:48.961005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-04 00:43:48.961009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-04 00:43:48.961013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-04 00:43:48.961017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-04 00:43:48.961021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-04 00:43:48.961026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-04 00:43:48.961030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-04 00:43:48.961035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-04 00:43:48.961039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-04 00:43:48.961043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-04 00:43:48.961048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-04 00:43:48.961052 | orchestrator |
2026-04-04 00:43:48.961056 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961060 | orchestrator | Saturday 04 April 2026 00:43:42 +0000 (0:00:00.656) 0:00:29.507 ********
2026-04-04 00:43:48.961065 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961069 | orchestrator |
2026-04-04 00:43:48.961073 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961077 | orchestrator | Saturday 04 April 2026 00:43:42 +0000 (0:00:00.197) 0:00:29.705 ********
2026-04-04 00:43:48.961082 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961086 | orchestrator |
2026-04-04 00:43:48.961090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961095 | orchestrator | Saturday 04 April 2026 00:43:42 +0000 (0:00:00.193) 0:00:29.899 ********
2026-04-04 00:43:48.961099 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961103 | orchestrator |
2026-04-04 00:43:48.961118 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961123 | orchestrator | Saturday 04 April 2026 00:43:42 +0000 (0:00:00.194) 0:00:30.094 ********
2026-04-04 00:43:48.961128 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961132 | orchestrator |
2026-04-04 00:43:48.961136 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961141 | orchestrator | Saturday 04 April 2026 00:43:42 +0000 (0:00:00.180) 0:00:30.275 ********
2026-04-04 00:43:48.961145 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961149 | orchestrator |
2026-04-04 00:43:48.961153 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961162 | orchestrator | Saturday 04 April 2026 00:43:43 +0000 (0:00:00.251) 0:00:30.526 ********
2026-04-04 00:43:48.961166 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961171 | orchestrator |
2026-04-04 00:43:48.961175 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961180 | orchestrator | Saturday 04 April 2026 00:43:43 +0000 (0:00:00.177) 0:00:30.704 ********
2026-04-04 00:43:48.961184 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961188 | orchestrator |
2026-04-04 00:43:48.961193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961197 | orchestrator | Saturday 04 April 2026 00:43:43 +0000 (0:00:00.161) 0:00:30.866 ********
2026-04-04 00:43:48.961214 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961218 | orchestrator |
2026-04-04 00:43:48.961223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961230 | orchestrator | Saturday 04 April 2026 00:43:43 +0000 (0:00:00.164) 0:00:31.031 ********
2026-04-04 00:43:48.961235 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-04 00:43:48.961239 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-04 00:43:48.961244 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-04 00:43:48.961248 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-04 00:43:48.961252 | orchestrator |
2026-04-04 00:43:48.961257 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961261 | orchestrator | Saturday 04 April 2026 00:43:44 +0000 (0:00:00.797) 0:00:31.828 ********
2026-04-04 00:43:48.961265 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961270 | orchestrator |
2026-04-04 00:43:48.961274 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961278 | orchestrator | Saturday 04 April 2026 00:43:44 +0000 (0:00:00.173) 0:00:32.002 ********
2026-04-04 00:43:48.961282 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961287 | orchestrator |
2026-04-04 00:43:48.961291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961295 | orchestrator | Saturday 04 April 2026 00:43:44 +0000 (0:00:00.175) 0:00:32.177 ********
2026-04-04 00:43:48.961300 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961304 | orchestrator |
2026-04-04 00:43:48.961308 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:43:48.961313 | orchestrator | Saturday 04 April 2026 00:43:45 +0000 (0:00:00.504) 0:00:32.682 ********
2026-04-04 00:43:48.961317 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961321 | orchestrator |
2026-04-04 00:43:48.961325 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-04 00:43:48.961330 | orchestrator | Saturday 04 April 2026 00:43:45 +0000 (0:00:00.183) 0:00:32.866 ********
2026-04-04 00:43:48.961334 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961338 | orchestrator |
2026-04-04 00:43:48.961343 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-04 00:43:48.961347 | orchestrator | Saturday 04 April 2026 00:43:45 +0000 (0:00:00.123) 0:00:32.989 ********
2026-04-04 00:43:48.961351 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b1fc2ad7-1445-5918-af09-c59800dad69a'}})
2026-04-04 00:43:48.961356 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8b2f720-8689-5378-93a8-1716210ee10b'}})
2026-04-04 00:43:48.961361 | orchestrator |
2026-04-04 00:43:48.961365 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-04 00:43:48.961369 | orchestrator | Saturday 04 April 2026 00:43:45 +0000 (0:00:00.172) 0:00:33.161 ********
2026-04-04 00:43:48.961374 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})
2026-04-04 00:43:48.961380 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})
2026-04-04 00:43:48.961388 | orchestrator |
2026-04-04 00:43:48.961393 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-04 00:43:48.961397 | orchestrator | Saturday 04 April 2026 00:43:47 +0000 (0:00:01.798) 0:00:34.960 ********
2026-04-04 00:43:48.961402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})
2026-04-04 00:43:48.961407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})
2026-04-04 00:43:48.961411 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:43:48.961416 | orchestrator |
2026-04-04 00:43:48.961420 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-04 00:43:48.961424 | orchestrator | Saturday 04 April 2026 00:43:47 +0000 (0:00:00.118) 0:00:35.078 ********
2026-04-04 00:43:48.961428 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})
2026-04-04 00:43:48.961436 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})
2026-04-04 00:43:53.921980 | orchestrator |
2026-04-04 00:43:53.922160 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-04 00:43:53.922178 | orchestrator | Saturday 04 April 2026
00:43:49 +0000 (0:00:01.311) 0:00:36.390 ******** 2026-04-04 00:43:53.922191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:53.922204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:53.922215 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922227 | orchestrator | 2026-04-04 00:43:53.922238 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-04 00:43:53.922249 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:00.143) 0:00:36.534 ******** 2026-04-04 00:43:53.922260 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922270 | orchestrator | 2026-04-04 00:43:53.922281 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-04 00:43:53.922292 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:00.131) 0:00:36.665 ******** 2026-04-04 00:43:53.922320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:53.922332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:53.922343 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922354 | orchestrator | 2026-04-04 00:43:53.922365 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-04 00:43:53.922376 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:00.147) 0:00:36.813 ******** 2026-04-04 00:43:53.922387 | orchestrator | skipping: [testbed-node-4] 2026-04-04 
00:43:53.922397 | orchestrator | 2026-04-04 00:43:53.922408 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-04 00:43:53.922419 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:00.135) 0:00:36.948 ******** 2026-04-04 00:43:53.922430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:53.922441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:53.922474 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922486 | orchestrator | 2026-04-04 00:43:53.922497 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-04 00:43:53.922508 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:00.134) 0:00:37.082 ******** 2026-04-04 00:43:53.922518 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922530 | orchestrator | 2026-04-04 00:43:53.922541 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-04 00:43:53.922552 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:00.248) 0:00:37.331 ******** 2026-04-04 00:43:53.922563 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:53.922574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:53.922585 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922595 | orchestrator | 2026-04-04 00:43:53.922606 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-04-04 00:43:53.922617 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.133) 0:00:37.464 ******** 2026-04-04 00:43:53.922628 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:53.922640 | orchestrator | 2026-04-04 00:43:53.922651 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-04 00:43:53.922661 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.126) 0:00:37.590 ******** 2026-04-04 00:43:53.922672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:53.922683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:53.922694 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922705 | orchestrator | 2026-04-04 00:43:53.922715 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-04 00:43:53.922726 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.133) 0:00:37.724 ******** 2026-04-04 00:43:53.922768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:53.922786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:53.922805 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922825 | orchestrator | 2026-04-04 00:43:53.922843 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-04 00:43:53.922881 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.137) 0:00:37.862 
******** 2026-04-04 00:43:53.922893 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:53.922904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:53.922915 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922925 | orchestrator | 2026-04-04 00:43:53.922936 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-04 00:43:53.922947 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.148) 0:00:38.010 ******** 2026-04-04 00:43:53.922957 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.922968 | orchestrator | 2026-04-04 00:43:53.922978 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-04 00:43:53.922989 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.120) 0:00:38.131 ******** 2026-04-04 00:43:53.923009 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923019 | orchestrator | 2026-04-04 00:43:53.923030 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-04 00:43:53.923047 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.126) 0:00:38.257 ******** 2026-04-04 00:43:53.923058 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923069 | orchestrator | 2026-04-04 00:43:53.923080 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-04 00:43:53.923090 | orchestrator | Saturday 04 April 2026 00:43:51 +0000 (0:00:00.129) 0:00:38.387 ******** 2026-04-04 00:43:53.923101 | orchestrator | ok: [testbed-node-4] => { 2026-04-04 00:43:53.923112 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-04 
00:43:53.923123 | orchestrator | } 2026-04-04 00:43:53.923134 | orchestrator | 2026-04-04 00:43:53.923145 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-04 00:43:53.923155 | orchestrator | Saturday 04 April 2026 00:43:51 +0000 (0:00:00.130) 0:00:38.518 ******** 2026-04-04 00:43:53.923166 | orchestrator | ok: [testbed-node-4] => { 2026-04-04 00:43:53.923177 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-04 00:43:53.923187 | orchestrator | } 2026-04-04 00:43:53.923198 | orchestrator | 2026-04-04 00:43:53.923209 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-04 00:43:53.923220 | orchestrator | Saturday 04 April 2026 00:43:51 +0000 (0:00:00.132) 0:00:38.650 ******** 2026-04-04 00:43:53.923230 | orchestrator | ok: [testbed-node-4] => { 2026-04-04 00:43:53.923241 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-04 00:43:53.923252 | orchestrator | } 2026-04-04 00:43:53.923263 | orchestrator | 2026-04-04 00:43:53.923273 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-04 00:43:53.923284 | orchestrator | Saturday 04 April 2026 00:43:51 +0000 (0:00:00.120) 0:00:38.771 ******** 2026-04-04 00:43:53.923295 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:53.923306 | orchestrator | 2026-04-04 00:43:53.923316 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-04 00:43:53.923327 | orchestrator | Saturday 04 April 2026 00:43:51 +0000 (0:00:00.586) 0:00:39.358 ******** 2026-04-04 00:43:53.923338 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:53.923348 | orchestrator | 2026-04-04 00:43:53.923359 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-04 00:43:53.923370 | orchestrator | Saturday 04 April 2026 00:43:52 +0000 (0:00:00.491) 0:00:39.849 ******** 2026-04-04 
00:43:53.923380 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:53.923391 | orchestrator | 2026-04-04 00:43:53.923402 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-04 00:43:53.923412 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.522) 0:00:40.372 ******** 2026-04-04 00:43:53.923423 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:53.923434 | orchestrator | 2026-04-04 00:43:53.923444 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-04 00:43:53.923455 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.114) 0:00:40.486 ******** 2026-04-04 00:43:53.923466 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923476 | orchestrator | 2026-04-04 00:43:53.923487 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-04 00:43:53.923498 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.092) 0:00:40.579 ******** 2026-04-04 00:43:53.923509 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923519 | orchestrator | 2026-04-04 00:43:53.923530 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-04 00:43:53.923541 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.098) 0:00:40.678 ******** 2026-04-04 00:43:53.923552 | orchestrator | ok: [testbed-node-4] => { 2026-04-04 00:43:53.923563 | orchestrator |  "vgs_report": { 2026-04-04 00:43:53.923574 | orchestrator |  "vg": [] 2026-04-04 00:43:53.923584 | orchestrator |  } 2026-04-04 00:43:53.923596 | orchestrator | } 2026-04-04 00:43:53.923613 | orchestrator | 2026-04-04 00:43:53.923624 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-04 00:43:53.923635 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.143) 0:00:40.821 ******** 2026-04-04 00:43:53.923646 | 
orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923657 | orchestrator | 2026-04-04 00:43:53.923667 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-04 00:43:53.923678 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.112) 0:00:40.934 ******** 2026-04-04 00:43:53.923688 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923699 | orchestrator | 2026-04-04 00:43:53.923710 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-04 00:43:53.923721 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.112) 0:00:41.046 ******** 2026-04-04 00:43:53.923756 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923769 | orchestrator | 2026-04-04 00:43:53.923780 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-04 00:43:53.923791 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.121) 0:00:41.168 ******** 2026-04-04 00:43:53.923802 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:53.923813 | orchestrator | 2026-04-04 00:43:53.923830 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-04 00:43:57.851069 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.111) 0:00:41.279 ******** 2026-04-04 00:43:57.851159 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851169 | orchestrator | 2026-04-04 00:43:57.851178 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-04 00:43:57.851185 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.102) 0:00:41.381 ******** 2026-04-04 00:43:57.851192 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851200 | orchestrator | 2026-04-04 00:43:57.851207 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-04-04 00:43:57.851215 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.253) 0:00:41.634 ******** 2026-04-04 00:43:57.851222 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851229 | orchestrator | 2026-04-04 00:43:57.851236 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-04 00:43:57.851243 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.110) 0:00:41.744 ******** 2026-04-04 00:43:57.851250 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851257 | orchestrator | 2026-04-04 00:43:57.851263 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-04 00:43:57.851270 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.109) 0:00:41.854 ******** 2026-04-04 00:43:57.851277 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851284 | orchestrator | 2026-04-04 00:43:57.851291 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-04 00:43:57.851298 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.107) 0:00:41.961 ******** 2026-04-04 00:43:57.851304 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851311 | orchestrator | 2026-04-04 00:43:57.851318 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-04 00:43:57.851324 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.104) 0:00:42.066 ******** 2026-04-04 00:43:57.851331 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851338 | orchestrator | 2026-04-04 00:43:57.851361 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-04 00:43:57.851369 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.101) 0:00:42.168 ******** 2026-04-04 00:43:57.851376 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851382 
| orchestrator | 2026-04-04 00:43:57.851389 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-04 00:43:57.851396 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:00.106) 0:00:42.275 ******** 2026-04-04 00:43:57.851402 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851428 | orchestrator | 2026-04-04 00:43:57.851436 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-04 00:43:57.851442 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.106) 0:00:42.381 ******** 2026-04-04 00:43:57.851449 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851456 | orchestrator | 2026-04-04 00:43:57.851463 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-04 00:43:57.851470 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.110) 0:00:42.492 ******** 2026-04-04 00:43:57.851478 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851492 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851498 | orchestrator | 2026-04-04 00:43:57.851504 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-04 00:43:57.851511 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.122) 0:00:42.614 ******** 2026-04-04 00:43:57.851517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851523 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851530 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851536 | orchestrator | 2026-04-04 00:43:57.851543 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-04 00:43:57.851550 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.117) 0:00:42.732 ******** 2026-04-04 00:43:57.851556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851571 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851578 | orchestrator | 2026-04-04 00:43:57.851584 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-04 00:43:57.851591 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.122) 0:00:42.854 ******** 2026-04-04 00:43:57.851598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851612 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851619 | orchestrator | 2026-04-04 00:43:57.851640 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-04 00:43:57.851647 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.254) 0:00:43.109 ******** 2026-04-04 
00:43:57.851654 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851668 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851674 | orchestrator | 2026-04-04 00:43:57.851680 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-04 00:43:57.851687 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.157) 0:00:43.266 ******** 2026-04-04 00:43:57.851700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851719 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851725 | orchestrator | 2026-04-04 00:43:57.851775 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-04 00:43:57.851782 | orchestrator | Saturday 04 April 2026 00:43:56 +0000 (0:00:00.130) 0:00:43.397 ******** 2026-04-04 00:43:57.851789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851803 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851810 | orchestrator | 
2026-04-04 00:43:57.851817 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-04 00:43:57.851823 | orchestrator | Saturday 04 April 2026 00:43:56 +0000 (0:00:00.124) 0:00:43.521 ******** 2026-04-04 00:43:57.851830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.851844 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.851851 | orchestrator | 2026-04-04 00:43:57.851857 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-04 00:43:57.851864 | orchestrator | Saturday 04 April 2026 00:43:56 +0000 (0:00:00.134) 0:00:43.655 ******** 2026-04-04 00:43:57.851871 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:57.851878 | orchestrator | 2026-04-04 00:43:57.851885 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-04 00:43:57.851892 | orchestrator | Saturday 04 April 2026 00:43:56 +0000 (0:00:00.526) 0:00:44.182 ******** 2026-04-04 00:43:57.851899 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:57.851905 | orchestrator | 2026-04-04 00:43:57.851912 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-04 00:43:57.851918 | orchestrator | Saturday 04 April 2026 00:43:57 +0000 (0:00:00.532) 0:00:44.715 ******** 2026-04-04 00:43:57.851924 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:43:57.851931 | orchestrator | 2026-04-04 00:43:57.851938 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-04 00:43:57.851945 | orchestrator | Saturday 04 April 2026 
00:43:57 +0000 (0:00:00.144) 0:00:44.859 ******** 2026-04-04 00:43:57.851952 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'vg_name': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'}) 2026-04-04 00:43:57.851960 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'vg_name': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'}) 2026-04-04 00:43:57.851967 | orchestrator | 2026-04-04 00:43:57.851974 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-04 00:43:57.851981 | orchestrator | Saturday 04 April 2026 00:43:57 +0000 (0:00:00.149) 0:00:45.008 ******** 2026-04-04 00:43:57.851987 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.851994 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:43:57.852001 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:43:57.852014 | orchestrator | 2026-04-04 00:43:57.852021 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-04 00:43:57.852027 | orchestrator | Saturday 04 April 2026 00:43:57 +0000 (0:00:00.132) 0:00:45.141 ******** 2026-04-04 00:43:57.852035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:43:57.852046 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:44:03.167243 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:44:03.167361 | orchestrator | 2026-04-04 
00:44:03.167375 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-04 00:44:03.167392 | orchestrator | Saturday 04 April 2026 00:43:57 +0000 (0:00:00.138) 0:00:45.279 ******** 2026-04-04 00:44:03.167403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})  2026-04-04 00:44:03.167414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})  2026-04-04 00:44:03.167424 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:44:03.167433 | orchestrator | 2026-04-04 00:44:03.167444 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-04 00:44:03.167453 | orchestrator | Saturday 04 April 2026 00:43:58 +0000 (0:00:00.144) 0:00:45.423 ******** 2026-04-04 00:44:03.167463 | orchestrator | ok: [testbed-node-4] => { 2026-04-04 00:44:03.167472 | orchestrator |  "lvm_report": { 2026-04-04 00:44:03.167482 | orchestrator |  "lv": [ 2026-04-04 00:44:03.167508 | orchestrator |  { 2026-04-04 00:44:03.167518 | orchestrator |  "lv_name": "osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a", 2026-04-04 00:44:03.167527 | orchestrator |  "vg_name": "ceph-b1fc2ad7-1445-5918-af09-c59800dad69a" 2026-04-04 00:44:03.167536 | orchestrator |  }, 2026-04-04 00:44:03.167545 | orchestrator |  { 2026-04-04 00:44:03.167554 | orchestrator |  "lv_name": "osd-block-f8b2f720-8689-5378-93a8-1716210ee10b", 2026-04-04 00:44:03.167563 | orchestrator |  "vg_name": "ceph-f8b2f720-8689-5378-93a8-1716210ee10b" 2026-04-04 00:44:03.167573 | orchestrator |  } 2026-04-04 00:44:03.167582 | orchestrator |  ], 2026-04-04 00:44:03.167591 | orchestrator |  "pv": [ 2026-04-04 00:44:03.167600 | orchestrator |  { 2026-04-04 00:44:03.167609 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-04 
00:44:03.167618 | orchestrator |  "vg_name": "ceph-b1fc2ad7-1445-5918-af09-c59800dad69a" 2026-04-04 00:44:03.167627 | orchestrator |  }, 2026-04-04 00:44:03.167636 | orchestrator |  { 2026-04-04 00:44:03.167645 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-04 00:44:03.167654 | orchestrator |  "vg_name": "ceph-f8b2f720-8689-5378-93a8-1716210ee10b" 2026-04-04 00:44:03.167664 | orchestrator |  } 2026-04-04 00:44:03.167674 | orchestrator |  ] 2026-04-04 00:44:03.167683 | orchestrator |  } 2026-04-04 00:44:03.167692 | orchestrator | } 2026-04-04 00:44:03.167702 | orchestrator | 2026-04-04 00:44:03.167711 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-04 00:44:03.167720 | orchestrator | 2026-04-04 00:44:03.167744 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-04 00:44:03.167755 | orchestrator | Saturday 04 April 2026 00:43:58 +0000 (0:00:00.380) 0:00:45.804 ******** 2026-04-04 00:44:03.167762 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-04 00:44:03.167767 | orchestrator | 2026-04-04 00:44:03.167773 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-04 00:44:03.167778 | orchestrator | Saturday 04 April 2026 00:43:58 +0000 (0:00:00.217) 0:00:46.022 ******** 2026-04-04 00:44:03.167806 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:44:03.167817 | orchestrator | 2026-04-04 00:44:03.167825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.167834 | orchestrator | Saturday 04 April 2026 00:43:58 +0000 (0:00:00.209) 0:00:46.232 ******** 2026-04-04 00:44:03.167843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-04 00:44:03.167852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-04 
00:44:03.167861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-04 00:44:03.167874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-04 00:44:03.167883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-04 00:44:03.167892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-04 00:44:03.167901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-04 00:44:03.167910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-04 00:44:03.167919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-04 00:44:03.167928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-04 00:44:03.167936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-04 00:44:03.167946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-04 00:44:03.167956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-04 00:44:03.167965 | orchestrator | 2026-04-04 00:44:03.167974 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.167983 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:00.367) 0:00:46.599 ******** 2026-04-04 00:44:03.167993 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168002 | orchestrator | 2026-04-04 00:44:03.168012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168020 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:00.195) 0:00:46.795 
******** 2026-04-04 00:44:03.168029 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168038 | orchestrator | 2026-04-04 00:44:03.168047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168072 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:00.171) 0:00:46.967 ******** 2026-04-04 00:44:03.168082 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168092 | orchestrator | 2026-04-04 00:44:03.168101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168110 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:00.206) 0:00:47.174 ******** 2026-04-04 00:44:03.168119 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168128 | orchestrator | 2026-04-04 00:44:03.168137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168146 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:00.182) 0:00:47.357 ******** 2026-04-04 00:44:03.168155 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168164 | orchestrator | 2026-04-04 00:44:03.168173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168182 | orchestrator | Saturday 04 April 2026 00:44:00 +0000 (0:00:00.189) 0:00:47.546 ******** 2026-04-04 00:44:03.168191 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168200 | orchestrator | 2026-04-04 00:44:03.168209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168218 | orchestrator | Saturday 04 April 2026 00:44:00 +0000 (0:00:00.477) 0:00:48.024 ******** 2026-04-04 00:44:03.168227 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168243 | orchestrator | 2026-04-04 00:44:03.168249 | orchestrator | TASK [Add known links to the list of available 
block devices] ****************** 2026-04-04 00:44:03.168255 | orchestrator | Saturday 04 April 2026 00:44:00 +0000 (0:00:00.177) 0:00:48.202 ******** 2026-04-04 00:44:03.168261 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:03.168266 | orchestrator | 2026-04-04 00:44:03.168272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168277 | orchestrator | Saturday 04 April 2026 00:44:01 +0000 (0:00:00.176) 0:00:48.378 ******** 2026-04-04 00:44:03.168283 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999) 2026-04-04 00:44:03.168289 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999) 2026-04-04 00:44:03.168295 | orchestrator | 2026-04-04 00:44:03.168300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168306 | orchestrator | Saturday 04 April 2026 00:44:01 +0000 (0:00:00.373) 0:00:48.751 ******** 2026-04-04 00:44:03.168311 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434) 2026-04-04 00:44:03.168317 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434) 2026-04-04 00:44:03.168322 | orchestrator | 2026-04-04 00:44:03.168328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168333 | orchestrator | Saturday 04 April 2026 00:44:01 +0000 (0:00:00.374) 0:00:49.126 ******** 2026-04-04 00:44:03.168339 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364) 2026-04-04 00:44:03.168348 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364) 2026-04-04 00:44:03.168357 | orchestrator | 2026-04-04 00:44:03.168366 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168374 | orchestrator | Saturday 04 April 2026 00:44:02 +0000 (0:00:00.390) 0:00:49.517 ******** 2026-04-04 00:44:03.168383 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0) 2026-04-04 00:44:03.168393 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0) 2026-04-04 00:44:03.168400 | orchestrator | 2026-04-04 00:44:03.168406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:44:03.168411 | orchestrator | Saturday 04 April 2026 00:44:02 +0000 (0:00:00.391) 0:00:49.908 ******** 2026-04-04 00:44:03.168417 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-04 00:44:03.168422 | orchestrator | 2026-04-04 00:44:03.168428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:03.168434 | orchestrator | Saturday 04 April 2026 00:44:02 +0000 (0:00:00.305) 0:00:50.214 ******** 2026-04-04 00:44:03.168439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-04 00:44:03.168445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-04 00:44:03.168450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-04 00:44:03.168456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-04 00:44:03.168461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-04 00:44:03.168467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-04 00:44:03.168501 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-04 00:44:03.168508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-04 00:44:03.168513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-04 00:44:03.168524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-04 00:44:03.168533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-04 00:44:03.168548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-04 00:44:11.132024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-04 00:44:11.132132 | orchestrator | 2026-04-04 00:44:11.132149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132162 | orchestrator | Saturday 04 April 2026 00:44:03 +0000 (0:00:00.406) 0:00:50.620 ******** 2026-04-04 00:44:11.132173 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132185 | orchestrator | 2026-04-04 00:44:11.132196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132208 | orchestrator | Saturday 04 April 2026 00:44:03 +0000 (0:00:00.180) 0:00:50.801 ******** 2026-04-04 00:44:11.132219 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132230 | orchestrator | 2026-04-04 00:44:11.132241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132252 | orchestrator | Saturday 04 April 2026 00:44:03 +0000 (0:00:00.178) 0:00:50.980 ******** 2026-04-04 00:44:11.132263 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132274 | orchestrator | 2026-04-04 00:44:11.132285 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132314 | orchestrator | Saturday 04 April 2026 00:44:04 +0000 (0:00:00.469) 0:00:51.449 ******** 2026-04-04 00:44:11.132326 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132337 | orchestrator | 2026-04-04 00:44:11.132349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132368 | orchestrator | Saturday 04 April 2026 00:44:04 +0000 (0:00:00.159) 0:00:51.608 ******** 2026-04-04 00:44:11.132385 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132403 | orchestrator | 2026-04-04 00:44:11.132420 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132436 | orchestrator | Saturday 04 April 2026 00:44:04 +0000 (0:00:00.181) 0:00:51.790 ******** 2026-04-04 00:44:11.132453 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132471 | orchestrator | 2026-04-04 00:44:11.132489 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132507 | orchestrator | Saturday 04 April 2026 00:44:04 +0000 (0:00:00.172) 0:00:51.963 ******** 2026-04-04 00:44:11.132525 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132546 | orchestrator | 2026-04-04 00:44:11.132566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132584 | orchestrator | Saturday 04 April 2026 00:44:04 +0000 (0:00:00.174) 0:00:52.137 ******** 2026-04-04 00:44:11.132602 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132622 | orchestrator | 2026-04-04 00:44:11.132642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132661 | orchestrator | Saturday 04 April 2026 00:44:04 +0000 (0:00:00.173) 0:00:52.310 ******** 
2026-04-04 00:44:11.132682 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-04 00:44:11.132704 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-04 00:44:11.132724 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-04 00:44:11.132767 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-04 00:44:11.132782 | orchestrator | 2026-04-04 00:44:11.132794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132807 | orchestrator | Saturday 04 April 2026 00:44:05 +0000 (0:00:00.601) 0:00:52.912 ******** 2026-04-04 00:44:11.132820 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132833 | orchestrator | 2026-04-04 00:44:11.132846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132886 | orchestrator | Saturday 04 April 2026 00:44:05 +0000 (0:00:00.180) 0:00:53.092 ******** 2026-04-04 00:44:11.132899 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132912 | orchestrator | 2026-04-04 00:44:11.132924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132935 | orchestrator | Saturday 04 April 2026 00:44:05 +0000 (0:00:00.180) 0:00:53.273 ******** 2026-04-04 00:44:11.132945 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132956 | orchestrator | 2026-04-04 00:44:11.132967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:44:11.132978 | orchestrator | Saturday 04 April 2026 00:44:06 +0000 (0:00:00.168) 0:00:53.442 ******** 2026-04-04 00:44:11.132989 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.132999 | orchestrator | 2026-04-04 00:44:11.133011 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-04 00:44:11.133021 | orchestrator | Saturday 04 April 2026 00:44:06 
+0000 (0:00:00.172) 0:00:53.614 ******** 2026-04-04 00:44:11.133032 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133043 | orchestrator | 2026-04-04 00:44:11.133054 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-04 00:44:11.133065 | orchestrator | Saturday 04 April 2026 00:44:06 +0000 (0:00:00.120) 0:00:53.735 ******** 2026-04-04 00:44:11.133075 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a8cb98ca-1bad-517a-917a-7c952ebb91ae'}}) 2026-04-04 00:44:11.133087 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'}}) 2026-04-04 00:44:11.133098 | orchestrator | 2026-04-04 00:44:11.133109 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-04 00:44:11.133121 | orchestrator | Saturday 04 April 2026 00:44:06 +0000 (0:00:00.297) 0:00:54.032 ******** 2026-04-04 00:44:11.133132 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'}) 2026-04-04 00:44:11.133145 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'}) 2026-04-04 00:44:11.133156 | orchestrator | 2026-04-04 00:44:11.133167 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-04 00:44:11.133197 | orchestrator | Saturday 04 April 2026 00:44:08 +0000 (0:00:01.882) 0:00:55.914 ******** 2026-04-04 00:44:11.133209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:11.133222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:11.133233 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133244 | orchestrator | 2026-04-04 00:44:11.133254 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-04 00:44:11.133265 | orchestrator | Saturday 04 April 2026 00:44:08 +0000 (0:00:00.135) 0:00:56.050 ******** 2026-04-04 00:44:11.133276 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'}) 2026-04-04 00:44:11.133295 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'}) 2026-04-04 00:44:11.133307 | orchestrator | 2026-04-04 00:44:11.133318 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-04 00:44:11.133328 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:01.369) 0:00:57.420 ******** 2026-04-04 00:44:11.133339 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:11.133358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:11.133369 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133380 | orchestrator | 2026-04-04 00:44:11.133391 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-04 00:44:11.133402 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.121) 0:00:57.541 ******** 2026-04-04 00:44:11.133412 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133423 | 
orchestrator | 2026-04-04 00:44:11.133434 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-04 00:44:11.133445 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.114) 0:00:57.656 ******** 2026-04-04 00:44:11.133456 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:11.133467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:11.133477 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133488 | orchestrator | 2026-04-04 00:44:11.133499 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-04 00:44:11.133510 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.132) 0:00:57.788 ******** 2026-04-04 00:44:11.133521 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133531 | orchestrator | 2026-04-04 00:44:11.133542 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-04 00:44:11.133553 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.106) 0:00:57.894 ******** 2026-04-04 00:44:11.133564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:11.133574 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:11.133585 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133596 | orchestrator | 2026-04-04 00:44:11.133607 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
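For context, the "Create dict of block VGs -> PVs", "Create block VGs", and "Create block LVs" tasks above follow a common ceph-ansible-style LVM layout: one volume group named `ceph-<uuid>` and one logical volume named `osd-block-<uuid>` per entry in `ceph_osd_devices`. A minimal Python sketch of that naming scheme, using the two UUIDs visible in the log (the `lvm_volumes` helper is an illustrative assumption, not the playbook's actual code):

```python
# Sketch of the VG/LV naming used by the "Create block VGs/LVs" tasks above.
# ceph_osd_devices maps a device name to the osd_lvm_uuid seen in the log;
# the helper below is a hypothetical illustration, not the playbook source.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "a8cb98ca-1bad-517a-917a-7c952ebb91ae"},
    "sdc": {"osd_lvm_uuid": "0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6"},
}

def lvm_volumes(devices: dict) -> list[dict]:
    """Build the lvm_volumes-style list: one block VG and LV per OSD device."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

for vol in lvm_volumes(ceph_osd_devices):
    # The tasks then run the equivalent of:
    #   vgcreate <data_vg> /dev/<device>
    #   lvcreate -n <data> -l 100%FREE <data_vg>
    print(vol["data_vg"], vol["data"])
```

The `data`/`data_vg` keys match the loop items printed by the "Create block VGs" and "Create block LVs" tasks in the log.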
2026-04-04 00:44:11.133617 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.134) 0:00:58.029 ******** 2026-04-04 00:44:11.133628 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133639 | orchestrator | 2026-04-04 00:44:11.133650 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-04 00:44:11.133660 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.131) 0:00:58.160 ******** 2026-04-04 00:44:11.133671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:11.133682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:11.133693 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:11.133704 | orchestrator | 2026-04-04 00:44:11.133715 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-04 00:44:11.133754 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.146) 0:00:58.306 ******** 2026-04-04 00:44:11.133766 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:44:11.133777 | orchestrator | 2026-04-04 00:44:11.133788 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-04 00:44:11.133799 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.133) 0:00:58.440 ******** 2026-04-04 00:44:11.133817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:16.571679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:16.571814 | 
orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.571824 | orchestrator | 2026-04-04 00:44:16.571832 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-04 00:44:16.571840 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.242) 0:00:58.683 ******** 2026-04-04 00:44:16.571847 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:16.571853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:16.571860 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.571866 | orchestrator | 2026-04-04 00:44:16.571888 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-04 00:44:16.571895 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.136) 0:00:58.820 ******** 2026-04-04 00:44:16.571902 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})  2026-04-04 00:44:16.571908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})  2026-04-04 00:44:16.571915 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.571920 | orchestrator | 2026-04-04 00:44:16.571924 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-04 00:44:16.571928 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.139) 0:00:58.959 ******** 2026-04-04 00:44:16.571932 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.571936 | orchestrator | 2026-04-04 00:44:16.571940 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-04 00:44:16.571944 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.120) 0:00:59.080 ******** 2026-04-04 00:44:16.571948 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.571952 | orchestrator | 2026-04-04 00:44:16.571955 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-04 00:44:16.571959 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.130) 0:00:59.210 ******** 2026-04-04 00:44:16.571963 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.571968 | orchestrator | 2026-04-04 00:44:16.571972 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-04 00:44:16.571976 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.125) 0:00:59.336 ******** 2026-04-04 00:44:16.571980 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:44:16.571984 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-04 00:44:16.571988 | orchestrator | } 2026-04-04 00:44:16.571992 | orchestrator | 2026-04-04 00:44:16.571996 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-04 00:44:16.572000 | orchestrator | Saturday 04 April 2026 00:44:12 +0000 (0:00:00.120) 0:00:59.457 ******** 2026-04-04 00:44:16.572004 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:44:16.572007 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-04 00:44:16.572011 | orchestrator | } 2026-04-04 00:44:16.572015 | orchestrator | 2026-04-04 00:44:16.572019 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-04 00:44:16.572023 | orchestrator | Saturday 04 April 2026 00:44:12 +0000 (0:00:00.129) 0:00:59.586 ******** 2026-04-04 00:44:16.572026 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:44:16.572030 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-04 00:44:16.572034 | orchestrator | } 2026-04-04 00:44:16.572038 | orchestrator | 2026-04-04 00:44:16.572042 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-04 00:44:16.572046 | orchestrator | Saturday 04 April 2026 00:44:12 +0000 (0:00:00.124) 0:00:59.710 ******** 2026-04-04 00:44:16.572062 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:44:16.572066 | orchestrator | 2026-04-04 00:44:16.572070 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-04 00:44:16.572074 | orchestrator | Saturday 04 April 2026 00:44:12 +0000 (0:00:00.528) 0:01:00.239 ******** 2026-04-04 00:44:16.572077 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:44:16.572081 | orchestrator | 2026-04-04 00:44:16.572085 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-04 00:44:16.572089 | orchestrator | Saturday 04 April 2026 00:44:13 +0000 (0:00:00.495) 0:01:00.734 ******** 2026-04-04 00:44:16.572092 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:44:16.572096 | orchestrator | 2026-04-04 00:44:16.572100 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-04 00:44:16.572104 | orchestrator | Saturday 04 April 2026 00:44:13 +0000 (0:00:00.521) 0:01:01.256 ******** 2026-04-04 00:44:16.572107 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:44:16.572111 | orchestrator | 2026-04-04 00:44:16.572115 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-04 00:44:16.572119 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.242) 0:01:01.498 ******** 2026-04-04 00:44:16.572123 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572126 | orchestrator | 2026-04-04 00:44:16.572130 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-04 00:44:16.572134 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.091) 0:01:01.590 ******** 2026-04-04 00:44:16.572138 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572141 | orchestrator | 2026-04-04 00:44:16.572145 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-04 00:44:16.572149 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.104) 0:01:01.694 ******** 2026-04-04 00:44:16.572153 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:44:16.572157 | orchestrator |  "vgs_report": { 2026-04-04 00:44:16.572161 | orchestrator |  "vg": [] 2026-04-04 00:44:16.572176 | orchestrator |  } 2026-04-04 00:44:16.572181 | orchestrator | } 2026-04-04 00:44:16.572184 | orchestrator | 2026-04-04 00:44:16.572188 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-04 00:44:16.572192 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.119) 0:01:01.814 ******** 2026-04-04 00:44:16.572196 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572200 | orchestrator | 2026-04-04 00:44:16.572204 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-04 00:44:16.572207 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.101) 0:01:01.915 ******** 2026-04-04 00:44:16.572211 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572215 | orchestrator | 2026-04-04 00:44:16.572219 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-04 00:44:16.572223 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.118) 0:01:02.034 ******** 2026-04-04 00:44:16.572226 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572230 | orchestrator | 2026-04-04 00:44:16.572234 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-04 00:44:16.572238 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.124) 0:01:02.158 ******** 2026-04-04 00:44:16.572242 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572246 | orchestrator | 2026-04-04 00:44:16.572251 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-04 00:44:16.572256 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.121) 0:01:02.279 ******** 2026-04-04 00:44:16.572263 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572270 | orchestrator | 2026-04-04 00:44:16.572276 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-04 00:44:16.572281 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.128) 0:01:02.408 ******** 2026-04-04 00:44:16.572287 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572300 | orchestrator | 2026-04-04 00:44:16.572307 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-04 00:44:16.572313 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.121) 0:01:02.530 ******** 2026-04-04 00:44:16.572319 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572326 | orchestrator | 2026-04-04 00:44:16.572332 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-04 00:44:16.572339 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.127) 0:01:02.658 ******** 2026-04-04 00:44:16.572345 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:44:16.572352 | orchestrator | 2026-04-04 00:44:16.572358 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-04 00:44:16.572365 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.122) 0:01:02.781 ******** 2026-04-04 00:44:16.572371 | orchestrator | skipping: 
[testbed-node-5]
2026-04-04 00:44:16.572378 | orchestrator |
2026-04-04 00:44:16.572385 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-04 00:44:16.572392 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.233) 0:01:03.014 ********
2026-04-04 00:44:16.572398 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:16.572405 | orchestrator |
2026-04-04 00:44:16.572411 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-04 00:44:16.572418 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.120) 0:01:03.135 ********
2026-04-04 00:44:16.572434 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:16.572441 | orchestrator |
2026-04-04 00:44:16.572447 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-04 00:44:16.572453 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.119) 0:01:03.254 ********
2026-04-04 00:44:16.572459 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:16.572465 | orchestrator |
2026-04-04 00:44:16.572472 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-04 00:44:16.572478 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.110) 0:01:03.364 ********
2026-04-04 00:44:16.572484 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:16.572490 | orchestrator |
2026-04-04 00:44:16.572496 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-04 00:44:16.572503 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.111) 0:01:03.475 ********
2026-04-04 00:44:16.572510 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:16.572517 | orchestrator |
2026-04-04 00:44:16.572524 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-04 00:44:16.572531 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.128) 0:01:03.604 ********
2026-04-04 00:44:16.572539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:16.572546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:16.572553 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:16.572560 | orchestrator |
2026-04-04 00:44:16.572567 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-04 00:44:16.572573 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.138) 0:01:03.742 ********
2026-04-04 00:44:16.572589 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:16.572597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:16.572604 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:16.572611 | orchestrator |
2026-04-04 00:44:16.572618 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-04 00:44:16.572631 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.130) 0:01:03.873 ********
2026-04-04 00:44:16.572646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.353598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.353669 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.353677 | orchestrator |
2026-04-04 00:44:19.353685 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-04 00:44:19.353693 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.131) 0:01:04.005 ********
2026-04-04 00:44:19.353699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.353720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.353806 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.353811 | orchestrator |
2026-04-04 00:44:19.353815 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-04 00:44:19.353819 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.150) 0:01:04.156 ********
2026-04-04 00:44:19.353823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.353828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.353831 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.353836 | orchestrator |
2026-04-04 00:44:19.353840 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-04 00:44:19.353844 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:00.136) 0:01:04.293 ********
2026-04-04 00:44:19.353848 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.353852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.353856 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.353859 | orchestrator |
2026-04-04 00:44:19.353863 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-04 00:44:19.353867 | orchestrator | Saturday 04 April 2026 00:44:17 +0000 (0:00:00.131) 0:01:04.424 ********
2026-04-04 00:44:19.353871 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.353875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.353878 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.353882 | orchestrator |
2026-04-04 00:44:19.353886 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-04 00:44:19.353890 | orchestrator | Saturday 04 April 2026 00:44:17 +0000 (0:00:00.277) 0:01:04.702 ********
2026-04-04 00:44:19.353894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.353897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.353901 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.353920 | orchestrator |
2026-04-04 00:44:19.353924 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-04 00:44:19.353928 | orchestrator | Saturday 04 April 2026 00:44:17 +0000 (0:00:00.134) 0:01:04.836 ********
2026-04-04 00:44:19.353932 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:44:19.353936 | orchestrator |
2026-04-04 00:44:19.353940 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-04 00:44:19.353944 | orchestrator | Saturday 04 April 2026 00:44:17 +0000 (0:00:00.495) 0:01:05.331 ********
2026-04-04 00:44:19.353948 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:44:19.353951 | orchestrator |
2026-04-04 00:44:19.353955 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-04 00:44:19.353959 | orchestrator | Saturday 04 April 2026 00:44:18 +0000 (0:00:00.530) 0:01:05.862 ********
2026-04-04 00:44:19.353963 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:44:19.353966 | orchestrator |
2026-04-04 00:44:19.353970 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-04 00:44:19.353974 | orchestrator | Saturday 04 April 2026 00:44:18 +0000 (0:00:00.128) 0:01:05.991 ********
2026-04-04 00:44:19.353979 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'vg_name': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.353984 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'vg_name': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.353988 | orchestrator |
2026-04-04 00:44:19.353991 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-04 00:44:19.353995 | orchestrator | Saturday 04 April 2026 00:44:18 +0000 (0:00:00.164) 0:01:06.155 ********
2026-04-04 00:44:19.354011 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.354099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.354103 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.354107 | orchestrator |
2026-04-04 00:44:19.354111 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-04 00:44:19.354115 | orchestrator | Saturday 04 April 2026 00:44:18 +0000 (0:00:00.153) 0:01:06.309 ********
2026-04-04 00:44:19.354123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.354127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.354130 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.354134 | orchestrator |
2026-04-04 00:44:19.354138 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-04 00:44:19.354142 | orchestrator | Saturday 04 April 2026 00:44:19 +0000 (0:00:00.134) 0:01:06.444 ********
2026-04-04 00:44:19.354146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:44:19.354151 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:44:19.354157 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:19.354163 | orchestrator |
2026-04-04 00:44:19.354167 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-04 00:44:19.354172 | orchestrator | Saturday 04 April 2026 00:44:19 +0000 (0:00:00.131) 0:01:06.575 ********
2026-04-04 00:44:19.354177 | orchestrator | ok: [testbed-node-5] => {
2026-04-04 00:44:19.354181 | orchestrator |     "lvm_report": {
2026-04-04 00:44:19.354186 | orchestrator |         "lv": [
2026-04-04 00:44:19.354195 | orchestrator |             {
2026-04-04 00:44:19.354200 | orchestrator |                 "lv_name": "osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6",
2026-04-04 00:44:19.354205 | orchestrator |                 "vg_name": "ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6"
2026-04-04 00:44:19.354210 | orchestrator |             },
2026-04-04 00:44:19.354214 | orchestrator |             {
2026-04-04 00:44:19.354218 | orchestrator |                 "lv_name": "osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae",
2026-04-04 00:44:19.354223 | orchestrator |                 "vg_name": "ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae"
2026-04-04 00:44:19.354227 | orchestrator |             }
2026-04-04 00:44:19.354231 | orchestrator |         ],
2026-04-04 00:44:19.354236 | orchestrator |         "pv": [
2026-04-04 00:44:19.354240 | orchestrator |             {
2026-04-04 00:44:19.354245 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-04 00:44:19.354249 | orchestrator |                 "vg_name": "ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae"
2026-04-04 00:44:19.354254 | orchestrator |             },
2026-04-04 00:44:19.354258 | orchestrator |             {
2026-04-04 00:44:19.354262 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-04 00:44:19.354267 | orchestrator |                 "vg_name": "ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6"
2026-04-04 00:44:19.354271 | orchestrator |             }
2026-04-04 00:44:19.354276 | orchestrator |         ]
2026-04-04 00:44:19.354280 | orchestrator |     }
2026-04-04 00:44:19.354284 | orchestrator | }
2026-04-04 00:44:19.354289 | orchestrator |
2026-04-04 00:44:19.354293 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:44:19.354298 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-04 00:44:19.354303 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-04 00:44:19.354307 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-04 00:44:19.354311 | orchestrator |
2026-04-04 00:44:19.354316 | orchestrator |
2026-04-04 00:44:19.354320 | orchestrator |
2026-04-04 00:44:19.354325 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:44:19.354329 | orchestrator | Saturday 04 April 2026 00:44:19 +0000 (0:00:00.127) 0:01:06.702 ********
2026-04-04 00:44:19.354334 | orchestrator | ===============================================================================
2026-04-04 00:44:19.354338 | orchestrator | Create block VGs -------------------------------------------------------- 5.65s
2026-04-04 00:44:19.354343 | orchestrator | Create block LVs -------------------------------------------------------- 4.22s
2026-04-04 00:44:19.354347 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s
2026-04-04 00:44:19.354351 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.63s
2026-04-04 00:44:19.354355 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.59s
2026-04-04 00:44:19.354360 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s
2026-04-04 00:44:19.354364 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.48s
2026-04-04 00:44:19.354368 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s
2026-04-04 00:44:19.354377 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s
2026-04-04 00:44:19.582259 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2026-04-04 00:44:19.582350 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2026-04-04 00:44:19.582361 | orchestrator | Print LVM report data --------------------------------------------------- 0.79s
2026-04-04 00:44:19.582368 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s
2026-04-04 00:44:19.582375 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.68s
2026-04-04 00:44:19.582406 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-04-04 00:44:19.582412 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2026-04-04 00:44:19.582430 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.65s
2026-04-04 00:44:19.582437 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.61s
2026-04-04 00:44:19.582444 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-04-04 00:44:19.582450 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.60s
2026-04-04 00:44:30.913376 | orchestrator | 2026-04-04 00:44:30 | INFO  | Prepare task for execution of facts.
2026-04-04 00:44:30.982215 | orchestrator | 2026-04-04 00:44:30 | INFO  | Task e4964799-72f2-4af3-901d-fa07f4dc79f7 (facts) was prepared for execution.
2026-04-04 00:44:30.982321 | orchestrator | 2026-04-04 00:44:30 | INFO  | It takes a moment until task e4964799-72f2-4af3-901d-fa07f4dc79f7 (facts) has been started and output is visible here.
2026-04-04 00:44:42.510867 | orchestrator |
2026-04-04 00:44:42.510957 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-04 00:44:42.510969 | orchestrator |
2026-04-04 00:44:42.510976 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-04 00:44:42.510982 | orchestrator | Saturday 04 April 2026 00:44:33 +0000 (0:00:00.295) 0:00:00.295 ********
2026-04-04 00:44:42.510989 | orchestrator | ok: [testbed-manager]
2026-04-04 00:44:42.510996 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:44:42.511002 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:44:42.511009 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:44:42.511015 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:44:42.511022 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:44:42.511028 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:44:42.511034 | orchestrator |
2026-04-04 00:44:42.511041 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-04 00:44:42.511047 | orchestrator | Saturday 04 April 2026 00:44:35 +0000 (0:00:01.193) 0:00:01.489 ********
2026-04-04 00:44:42.511053 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:44:42.511062 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:44:42.511068 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:44:42.511074 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:44:42.511080 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:44:42.511086 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:44:42.511092 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:42.511098 | orchestrator |
2026-04-04 00:44:42.511104 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-04 00:44:42.511111 | orchestrator |
2026-04-04 00:44:42.511117 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-04 00:44:42.511124 | orchestrator | Saturday 04 April 2026 00:44:36 +0000 (0:00:01.060) 0:00:02.550 ********
2026-04-04 00:44:42.511131 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:44:42.511137 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:44:42.511144 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:44:42.511151 | orchestrator | ok: [testbed-manager]
2026-04-04 00:44:42.511158 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:44:42.511165 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:44:42.511172 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:44:42.511178 | orchestrator |
2026-04-04 00:44:42.511185 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-04 00:44:42.511192 | orchestrator |
2026-04-04 00:44:42.511199 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-04 00:44:42.511206 | orchestrator | Saturday 04 April 2026 00:44:41 +0000 (0:00:05.666) 0:00:08.217 ********
2026-04-04 00:44:42.511214 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:44:42.511221 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:44:42.511254 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:44:42.511262 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:44:42.511268 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:44:42.511275 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:44:42.511282 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:44:42.511289 | orchestrator |
2026-04-04 00:44:42.511296 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:44:42.511303 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:44:42.511312 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:44:42.511318 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:44:42.511325 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:44:42.511332 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:44:42.511339 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:44:42.511346 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:44:42.511353 | orchestrator |
2026-04-04 00:44:42.511360 | orchestrator |
2026-04-04 00:44:42.511367 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:44:42.511374 | orchestrator | Saturday 04 April 2026 00:44:42 +0000 (0:00:00.456) 0:00:08.673 ********
2026-04-04 00:44:42.511381 | orchestrator | ===============================================================================
2026-04-04 00:44:42.511387 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.67s
2026-04-04 00:44:42.511394 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s
2026-04-04 00:44:42.511414 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2026-04-04 00:44:42.511421 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2026-04-04 00:44:53.948198 | orchestrator | 2026-04-04 00:44:53 | INFO  | Prepare task for execution of frr.
2026-04-04 00:44:54.012822 | orchestrator | 2026-04-04 00:44:54 | INFO  | Task 6de3d982-2f74-4b9a-82e8-996796f3f3ba (frr) was prepared for execution.
2026-04-04 00:44:54.012911 | orchestrator | 2026-04-04 00:44:54 | INFO  | It takes a moment until task 6de3d982-2f74-4b9a-82e8-996796f3f3ba (frr) has been started and output is visible here.
2026-04-04 00:45:17.358272 | orchestrator |
2026-04-04 00:45:17.358352 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-04 00:45:17.358359 | orchestrator |
2026-04-04 00:45:17.358364 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-04 00:45:17.358368 | orchestrator | Saturday 04 April 2026 00:44:56 +0000 (0:00:00.269) 0:00:00.269 ********
2026-04-04 00:45:17.358373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-04 00:45:17.358378 | orchestrator |
2026-04-04 00:45:17.358382 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-04 00:45:17.358386 | orchestrator | Saturday 04 April 2026 00:44:57 +0000 (0:00:00.201) 0:00:00.470 ********
2026-04-04 00:45:17.358390 | orchestrator | changed: [testbed-manager]
2026-04-04 00:45:17.358395 | orchestrator |
2026-04-04 00:45:17.358399 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-04 00:45:17.358421 | orchestrator | Saturday 04 April 2026 00:44:58 +0000 (0:00:01.393) 0:00:01.863 ********
2026-04-04 00:45:17.358425 | orchestrator | changed: [testbed-manager]
2026-04-04 00:45:17.358429 | orchestrator |
2026-04-04 00:45:17.358433 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-04 00:45:17.358437 | orchestrator | Saturday 04 April 2026 00:45:07 +0000 (0:00:09.065) 0:00:10.929 ********
2026-04-04 00:45:17.358441 | orchestrator | ok: [testbed-manager]
2026-04-04 00:45:17.358445 | orchestrator |
2026-04-04 00:45:17.358450 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-04 00:45:17.358454 | orchestrator | Saturday 04 April 2026 00:45:08 +0000 (0:00:00.998) 0:00:11.928 ********
2026-04-04 00:45:17.358457 | orchestrator | changed: [testbed-manager]
2026-04-04 00:45:17.358461 | orchestrator |
2026-04-04 00:45:17.358465 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-04 00:45:17.358469 | orchestrator | Saturday 04 April 2026 00:45:09 +0000 (0:00:00.894) 0:00:12.822 ********
2026-04-04 00:45:17.358472 | orchestrator | ok: [testbed-manager]
2026-04-04 00:45:17.358476 | orchestrator |
2026-04-04 00:45:17.358480 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-04 00:45:17.358484 | orchestrator | Saturday 04 April 2026 00:45:10 +0000 (0:00:01.156) 0:00:13.979 ********
2026-04-04 00:45:17.358488 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:45:17.358492 | orchestrator |
2026-04-04 00:45:17.358495 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-04 00:45:17.358499 | orchestrator | Saturday 04 April 2026 00:45:10 +0000 (0:00:00.151) 0:00:14.131 ********
2026-04-04 00:45:17.358503 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:45:17.358506 | orchestrator |
2026-04-04 00:45:17.358510 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-04 00:45:17.358514 | orchestrator | Saturday 04 April 2026 00:45:11 +0000 (0:00:00.292) 0:00:14.424 ********
2026-04-04 00:45:17.358518 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:45:17.358521 | orchestrator |
2026-04-04 00:45:17.358525 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-04 00:45:17.358530 | orchestrator | Saturday 04 April 2026 00:45:11 +0000 (0:00:00.153) 0:00:14.577 ********
2026-04-04 00:45:17.358533 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:45:17.358537 | orchestrator |
2026-04-04 00:45:17.358541 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-04 00:45:17.358545 | orchestrator | Saturday 04 April 2026 00:45:11 +0000 (0:00:00.133) 0:00:14.711 ********
2026-04-04 00:45:17.358548 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:45:17.358552 | orchestrator |
2026-04-04 00:45:17.358556 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-04 00:45:17.358560 | orchestrator | Saturday 04 April 2026 00:45:11 +0000 (0:00:00.154) 0:00:14.866 ********
2026-04-04 00:45:17.358563 | orchestrator | changed: [testbed-manager]
2026-04-04 00:45:17.358567 | orchestrator |
2026-04-04 00:45:17.358571 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-04 00:45:17.358575 | orchestrator | Saturday 04 April 2026 00:45:12 +0000 (0:00:00.952) 0:00:15.818 ********
2026-04-04 00:45:17.358578 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-04 00:45:17.358582 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-04 00:45:17.358587 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-04 00:45:17.358591 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-04 00:45:17.358595 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-04 00:45:17.358598 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-04 00:45:17.358606 | orchestrator |
2026-04-04 00:45:17.358610 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-04 00:45:17.358614 | orchestrator | Saturday 04 April 2026 00:45:14 +0000 (0:00:02.234) 0:00:18.052 ********
2026-04-04 00:45:17.358617 | orchestrator | ok: [testbed-manager]
2026-04-04 00:45:17.358621 | orchestrator |
2026-04-04 00:45:17.358625 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-04 00:45:17.358629 | orchestrator | Saturday 04 April 2026 00:45:15 +0000 (0:00:01.097) 0:00:19.149 ********
2026-04-04 00:45:17.358633 | orchestrator | changed: [testbed-manager]
2026-04-04 00:45:17.358649 | orchestrator |
2026-04-04 00:45:17.358660 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:45:17.358664 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-04 00:45:17.358674 | orchestrator |
2026-04-04 00:45:17.358678 | orchestrator |
2026-04-04 00:45:17.358693 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:45:17.358697 | orchestrator | Saturday 04 April 2026 00:45:17 +0000 (0:00:01.311) 0:00:20.461 ********
2026-04-04 00:45:17.358701 | orchestrator | ===============================================================================
2026-04-04 00:45:17.358739 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.07s
2026-04-04 00:45:17.358755 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.23s
2026-04-04 00:45:17.358759 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.39s
2026-04-04 00:45:17.358763 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.31s
2026-04-04 00:45:17.358767 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.16s
2026-04-04 00:45:17.358770 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.10s
2026-04-04 00:45:17.358774 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s
2026-04-04 00:45:17.358778 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.95s
2026-04-04 00:45:17.358782 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.89s
2026-04-04 00:45:17.358786 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.29s
2026-04-04 00:45:17.358789 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2026-04-04 00:45:17.358793 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s
2026-04-04 00:45:17.358797 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s
2026-04-04 00:45:17.358801 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s
2026-04-04 00:45:17.358804 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s
2026-04-04 00:45:17.487248 | orchestrator |
2026-04-04 00:45:17.488193 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Apr 4 00:45:17 UTC 2026
2026-04-04 00:45:17.488233 | orchestrator |
2026-04-04 00:45:18.468420 | orchestrator | 2026-04-04 00:45:18 | INFO  | Collection nutshell is prepared for execution
2026-04-04 00:45:18.566395 | orchestrator | 2026-04-04 00:45:18 | INFO  | A [0] - dotfiles
2026-04-04 00:45:28.715536 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [0] - homer
2026-04-04 00:45:28.715593 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [0] - netdata
2026-04-04 00:45:28.715599 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [0] - openstackclient
2026-04-04 00:45:28.715885 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [0] - phpmyadmin
2026-04-04 00:45:28.716078 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [0] - common
2026-04-04 00:45:28.720885 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- loadbalancer
2026-04-04 00:45:28.721029 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [2] --- opensearch
2026-04-04 00:45:28.721054 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [2] --- mariadb-ng
2026-04-04 00:45:28.721485 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [3] ---- horizon
2026-04-04 00:45:28.721539 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [3] ---- keystone
2026-04-04 00:45:28.721675 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- neutron
2026-04-04 00:45:28.722135 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [5] ------ wait-for-nova
2026-04-04 00:45:28.722271 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [6] ------- octavia
2026-04-04 00:45:28.723833 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- barbican
2026-04-04 00:45:28.723862 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- designate
2026-04-04 00:45:28.723990 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- ironic
2026-04-04 00:45:28.724128 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- placement
2026-04-04 00:45:28.724375 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- magnum
2026-04-04 00:45:28.725990 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- openvswitch
2026-04-04 00:45:28.726276 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [2] --- ovn
2026-04-04 00:45:28.726538 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- memcached
2026-04-04 00:45:28.726587 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- redis
2026-04-04 00:45:28.726815 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- rabbitmq-ng
2026-04-04 00:45:28.727190 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [0] - kubernetes
2026-04-04 00:45:28.729600 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- kubeconfig
2026-04-04 00:45:28.729618 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- copy-kubeconfig
2026-04-04 00:45:28.730104 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [0] - ceph
2026-04-04 00:45:28.732142 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [1] -- ceph-pools
2026-04-04 00:45:28.732285 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [2] --- copy-ceph-keys
2026-04-04 00:45:28.732431 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [3] ---- cephclient
2026-04-04 00:45:28.732574 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-04 00:45:28.732587 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- wait-for-keystone
2026-04-04 00:45:28.732878 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-04 00:45:28.732993 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [5] ------ glance
2026-04-04 00:45:28.733003 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [5] ------ cinder
2026-04-04 00:45:28.733213 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [5] ------ nova
2026-04-04 00:45:28.733395 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [4] ----- prometheus
2026-04-04 00:45:28.733520 | orchestrator | 2026-04-04 00:45:28 | INFO  | A [5] ------ grafana
2026-04-04 00:45:28.930185 | orchestrator | 2026-04-04 00:45:28 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-04 00:45:28.930273 | orchestrator | 2026-04-04 00:45:28 | INFO  | Tasks are running in the background
2026-04-04 00:45:30.794211 | orchestrator | 2026-04-04 00:45:30 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-04 00:45:32.998612 | orchestrator | 2026-04-04 00:45:32 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED
2026-04-04 00:45:32.998748 | orchestrator | 2026-04-04 00:45:32 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:45:32.998943 | orchestrator | 2026-04-04 00:45:32 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:45:33.000778 | orchestrator | 2026-04-04 00:45:32 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED
2026-04-04 00:45:33.001336 | orchestrator | 2026-04-04 00:45:33 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state STARTED
2026-04-04 00:45:33.002084 | orchestrator | 2026-04-04 00:45:33 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED
2026-04-04 00:45:33.005199 | orchestrator | 2026-04-04 00:45:33 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED
2026-04-04 00:45:33.005276 | orchestrator | 2026-04-04 00:45:33 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:45:36.076679 | orchestrator | 2026-04-04 00:45:36 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED
2026-04-04 00:45:36.076897 | orchestrator | 2026-04-04 00:45:36 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:45:36.076933 | orchestrator | 2026-04-04 00:45:36 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:45:36.076951 | orchestrator | 2026-04-04 00:45:36 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED
2026-04-04 00:45:36.079454 | orchestrator | 2026-04-04 00:45:36 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state STARTED
2026-04-04 00:45:36.080075 | orchestrator | 2026-04-04 00:45:36 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED
2026-04-04 00:45:36.082898 | orchestrator | 2026-04-04 00:45:36 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED
2026-04-04 00:45:36.082964 | orchestrator | 2026-04-04 00:45:36 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:45:39.111552 | orchestrator | 2026-04-04 00:45:39 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED
2026-04-04 00:45:39.111619 | orchestrator | 2026-04-04 00:45:39 | INFO  | Task
ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:45:39.114174 | orchestrator | 2026-04-04 00:45:39 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:45:39.119606 | orchestrator | 2026-04-04 00:45:39 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:45:39.119679 | orchestrator | 2026-04-04 00:45:39 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state STARTED 2026-04-04 00:45:39.125400 | orchestrator | 2026-04-04 00:45:39 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:45:39.125485 | orchestrator | 2026-04-04 00:45:39 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:45:39.125492 | orchestrator | 2026-04-04 00:45:39 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:42.195622 | orchestrator | 2026-04-04 00:45:42 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:45:42.195775 | orchestrator | 2026-04-04 00:45:42 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:45:42.195787 | orchestrator | 2026-04-04 00:45:42 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:45:42.195793 | orchestrator | 2026-04-04 00:45:42 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:45:42.195798 | orchestrator | 2026-04-04 00:45:42 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state STARTED 2026-04-04 00:45:42.195803 | orchestrator | 2026-04-04 00:45:42 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:45:42.195827 | orchestrator | 2026-04-04 00:45:42 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:45:42.195833 | orchestrator | 2026-04-04 00:45:42 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:45.279784 | orchestrator | 2026-04-04 00:45:45 | INFO  | Task 
cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:45:45.279856 | orchestrator | 2026-04-04 00:45:45 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:45:45.279863 | orchestrator | 2026-04-04 00:45:45 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:45:45.279868 | orchestrator | 2026-04-04 00:45:45 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:45:45.279872 | orchestrator | 2026-04-04 00:45:45 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state STARTED 2026-04-04 00:45:45.279876 | orchestrator | 2026-04-04 00:45:45 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:45:45.279880 | orchestrator | 2026-04-04 00:45:45 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:45:45.279884 | orchestrator | 2026-04-04 00:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:48.358470 | orchestrator | 2026-04-04 00:45:48 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:45:48.359676 | orchestrator | 2026-04-04 00:45:48 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:45:48.360181 | orchestrator | 2026-04-04 00:45:48 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:45:48.360941 | orchestrator | 2026-04-04 00:45:48 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:45:48.362553 | orchestrator | 2026-04-04 00:45:48 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state STARTED 2026-04-04 00:45:48.363255 | orchestrator | 2026-04-04 00:45:48 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:45:48.371950 | orchestrator | 2026-04-04 00:45:48 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:45:48.372014 | orchestrator | 2026-04-04 
00:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:51.414318 | orchestrator | 2026-04-04 00:45:51 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:45:51.414814 | orchestrator | 2026-04-04 00:45:51 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:45:51.417325 | orchestrator | 2026-04-04 00:45:51 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:45:51.422198 | orchestrator | 2026-04-04 00:45:51 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:45:51.422905 | orchestrator | 2026-04-04 00:45:51 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state STARTED 2026-04-04 00:45:51.424200 | orchestrator | 2026-04-04 00:45:51 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:45:51.425862 | orchestrator | 2026-04-04 00:45:51 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:45:51.426438 | orchestrator | 2026-04-04 00:45:51 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:54.505385 | orchestrator | 2026-04-04 00:45:54.505457 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-04 00:45:54.505464 | orchestrator | 2026-04-04 00:45:54.505469 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-04-04 00:45:54.505479 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:00.813) 0:00:00.813 ******** 2026-04-04 00:45:54.505498 | orchestrator | changed: [testbed-manager] 2026-04-04 00:45:54.505504 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:45:54.505508 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:45:54.505511 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:45:54.505515 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:45:54.505519 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:45:54.505523 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:45:54.505526 | orchestrator | 2026-04-04 00:45:54.505530 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-04-04 00:45:54.505534 | orchestrator | Saturday 04 April 2026 00:45:42 +0000 (0:00:04.195) 0:00:05.008 ******** 2026-04-04 00:45:54.505538 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-04 00:45:54.505543 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-04 00:45:54.505547 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-04 00:45:54.505550 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-04 00:45:54.505554 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-04 00:45:54.505558 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-04 00:45:54.505562 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-04 00:45:54.505566 | orchestrator | 2026-04-04 00:45:54.505570 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-04-04 00:45:54.505574 | orchestrator | Saturday 04 April 2026 00:45:45 +0000 (0:00:02.658) 0:00:07.667 ******** 2026-04-04 00:45:54.505581 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:45:45.189902', 'end': '2026-04-04 00:45:45.194841', 'delta': '0:00:00.004939', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:45:54.505587 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:45:43.571180', 'end': '2026-04-04 00:45:43.579072', 'delta': '0:00:00.007892', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:45:54.505594 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:45:43.931049', 'end': '2026-04-04 00:45:43.937357', 'delta': '0:00:00.006308', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:45:54.505637 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:45:44.560068', 'end': '2026-04-04 00:45:44.565791', 'delta': '0:00:00.005723', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:45:54.505656 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:45:43.417568', 'end': '2026-04-04 00:45:43.424589', 'delta': '0:00:00.007021', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:45:54.505662 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:45:45.177545', 'end': '2026-04-04 00:45:45.183033', 'delta': '0:00:00.005488', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:45:54.505668 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:45:43.633968', 'end': '2026-04-04 00:45:43.637258', 'delta': '0:00:00.003290', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:45:54.505674 | orchestrator | 2026-04-04 00:45:54.505680 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-04-04 00:45:54.505744 | orchestrator | Saturday 04 April 2026 00:45:46 +0000 (0:00:01.620) 0:00:09.288 ******** 2026-04-04 00:45:54.505754 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-04 00:45:54.505760 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-04 00:45:54.505765 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-04 00:45:54.505770 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-04 00:45:54.505781 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-04 00:45:54.505787 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-04 00:45:54.505792 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-04 00:45:54.505797 | orchestrator | 2026-04-04 00:45:54.505803 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-04-04 00:45:54.505808 | orchestrator | Saturday 04 April 2026 00:45:48 +0000 (0:00:01.386) 0:00:10.675 ******** 2026-04-04 00:45:54.505814 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-04 00:45:54.505820 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-04 00:45:54.505862 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-04 00:45:54.505870 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-04 00:45:54.505875 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-04 00:45:54.505881 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-04 00:45:54.505886 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-04 00:45:54.505891 | orchestrator | 2026-04-04 00:45:54.505897 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:45:54.505911 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:45:54.505920 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:45:54.505926 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:45:54.505932 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:45:54.505938 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:45:54.505944 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:45:54.505950 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:45:54.505956 | orchestrator | 2026-04-04 00:45:54.505962 | orchestrator | 2026-04-04 00:45:54.505969 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-04-04 00:45:54.505976 | orchestrator | Saturday 04 April 2026 00:45:50 +0000 (0:00:02.566) 0:00:13.242 ******** 2026-04-04 00:45:54.505982 | orchestrator | =============================================================================== 2026-04-04 00:45:54.506251 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.20s 2026-04-04 00:45:54.506274 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.66s 2026-04-04 00:45:54.506280 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.57s 2026-04-04 00:45:54.506287 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.62s 2026-04-04 00:45:54.506294 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.39s 2026-04-04 00:45:54.506301 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:45:54.506308 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:45:54.506315 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:45:54.506322 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:45:54.506329 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task 634e807f-dd06-413b-9743-902281aef99f is in state SUCCESS 2026-04-04 00:45:54.506344 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:45:54.506348 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:45:54.506352 | orchestrator | 2026-04-04 00:45:54 | INFO  | Task 
119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:45:54.506357 | orchestrator | 2026-04-04 00:45:54 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:57.532519 | orchestrator | 2026-04-04 00:45:57 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:45:57.535734 | orchestrator | 2026-04-04 00:45:57 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:45:57.535824 | orchestrator | 2026-04-04 00:45:57 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:45:57.536394 | orchestrator | 2026-04-04 00:45:57 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:45:57.539447 | orchestrator | 2026-04-04 00:45:57 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:45:57.539762 | orchestrator | 2026-04-04 00:45:57 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:45:57.541457 | orchestrator | 2026-04-04 00:45:57 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:45:57.541486 | orchestrator | 2026-04-04 00:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:00.604643 | orchestrator | 2026-04-04 00:46:00 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:46:00.605280 | orchestrator | 2026-04-04 00:46:00 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:46:00.606212 | orchestrator | 2026-04-04 00:46:00 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:46:00.607394 | orchestrator | 2026-04-04 00:46:00 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:46:00.609232 | orchestrator | 2026-04-04 00:46:00 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:46:00.609615 | orchestrator | 2026-04-04 00:46:00 | INFO  | Task 
42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:46:00.610609 | orchestrator | 2026-04-04 00:46:00 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:46:00.610664 | orchestrator | 2026-04-04 00:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:03.645042 | orchestrator | 2026-04-04 00:46:03 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:46:03.645202 | orchestrator | 2026-04-04 00:46:03 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:46:03.648145 | orchestrator | 2026-04-04 00:46:03 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:46:03.651019 | orchestrator | 2026-04-04 00:46:03 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:46:03.651076 | orchestrator | 2026-04-04 00:46:03 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:46:03.651272 | orchestrator | 2026-04-04 00:46:03 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:46:03.652457 | orchestrator | 2026-04-04 00:46:03 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:46:03.652482 | orchestrator | 2026-04-04 00:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:06.748924 | orchestrator | 2026-04-04 00:46:06 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:46:06.749013 | orchestrator | 2026-04-04 00:46:06 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:46:06.749023 | orchestrator | 2026-04-04 00:46:06 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:46:06.749030 | orchestrator | 2026-04-04 00:46:06 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:46:06.749037 | orchestrator | 2026-04-04 00:46:06 | INFO  | Task 
4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:46:06.749044 | orchestrator | 2026-04-04 00:46:06 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:46:06.749051 | orchestrator | 2026-04-04 00:46:06 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:46:06.749059 | orchestrator | 2026-04-04 00:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:09.863643 | orchestrator | 2026-04-04 00:46:09 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:46:09.865120 | orchestrator | 2026-04-04 00:46:09 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:46:09.866595 | orchestrator | 2026-04-04 00:46:09 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:46:09.867988 | orchestrator | 2026-04-04 00:46:09 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:46:09.869211 | orchestrator | 2026-04-04 00:46:09 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:46:09.873937 | orchestrator | 2026-04-04 00:46:09 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:46:09.875499 | orchestrator | 2026-04-04 00:46:09 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:46:09.875539 | orchestrator | 2026-04-04 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:12.985884 | orchestrator | 2026-04-04 00:46:12 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:46:12.985971 | orchestrator | 2026-04-04 00:46:12 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:46:12.985981 | orchestrator | 2026-04-04 00:46:12 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:46:12.985989 | orchestrator | 2026-04-04 00:46:12 | INFO  | Task 
898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:46:12.985997 | orchestrator | 2026-04-04 00:46:12 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:46:12.986004 | orchestrator | 2026-04-04 00:46:12 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:46:12.986058 | orchestrator | 2026-04-04 00:46:12 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state STARTED 2026-04-04 00:46:12.986069 | orchestrator | 2026-04-04 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:16.037903 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:46:16.041391 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:46:16.041435 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:46:16.043657 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 898bda08-6857-481d-a28f-f6f6628054d9 is in state STARTED 2026-04-04 00:46:16.045190 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:46:16.048267 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state STARTED 2026-04-04 00:46:16.048307 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 119b22f7-47ad-4174-8f24-b4cc5c280168 is in state SUCCESS 2026-04-04 00:46:16.048315 | orchestrator | 2026-04-04 00:46:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:19.082589 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state STARTED 2026-04-04 00:46:19.085082 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:46:19.085153 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task 
8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED [… repeated status polling condensed: from 00:46:19 to 00:47:01 the orchestrator re-checked roughly every 3 seconds ("Wait 1 second(s) until the next check"); tasks cef32441-942c-48e6-bec3-963df53ea6ef, ab7a468e-85c9-4525-a869-b1f5a6cd84d4, 8ee163ae-bd62-42f7-b681-5855b26add7d, 4401209e-65a8-4ada-8e7c-b8ec35a94253 and 42480841-898f-43a5-b59d-e31aaba06608 remained in state STARTED throughout; task 898bda08-6857-481d-a28f-f6f6628054d9 reached state SUCCESS at 00:46:25 …] 2026-04-04 00:47:01.891624 | orchestrator | 2026-04-04 00:47:01.891719 | orchestrator | 2026-04-04 00:47:01.891730 | orchestrator | PLAY [Apply role homer] ******************************************************** 
2026-04-04 00:47:01.891737 | orchestrator | 2026-04-04 00:47:01.891743 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-04 00:47:01.891751 | orchestrator | Saturday 04 April 2026 00:45:37 +0000 (0:00:00.276) 0:00:00.276 ******** 2026-04-04 00:47:01.891758 | orchestrator | ok: [testbed-manager] => { 2026-04-04 00:47:01.891766 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-04-04 00:47:01.891773 | orchestrator | } 2026-04-04 00:47:01.891779 | orchestrator | 2026-04-04 00:47:01.891785 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-04 00:47:01.891791 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:00.127) 0:00:00.403 ******** 2026-04-04 00:47:01.891798 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:01.891805 | orchestrator | 2026-04-04 00:47:01.891811 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-04 00:47:01.891818 | orchestrator | Saturday 04 April 2026 00:45:40 +0000 (0:00:01.961) 0:00:02.364 ******** 2026-04-04 00:47:01.891824 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-04 00:47:01.891830 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-04 00:47:01.891836 | orchestrator | 2026-04-04 00:47:01.891842 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-04 00:47:01.891849 | orchestrator | Saturday 04 April 2026 00:45:40 +0000 (0:00:00.935) 0:00:03.300 ******** 2026-04-04 00:47:01.891856 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.891862 | orchestrator | 2026-04-04 00:47:01.891868 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-04 00:47:01.891875 | orchestrator | Saturday 04 
April 2026 00:45:42 +0000 (0:00:01.822) 0:00:05.123 ******** 2026-04-04 00:47:01.891882 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.891888 | orchestrator | 2026-04-04 00:47:01.891895 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-04 00:47:01.891901 | orchestrator | Saturday 04 April 2026 00:45:44 +0000 (0:00:01.708) 0:00:06.831 ******** 2026-04-04 00:47:01.891908 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-04-04 00:47:01.891914 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:01.891938 | orchestrator | 2026-04-04 00:47:01.891945 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-04 00:47:01.891952 | orchestrator | Saturday 04 April 2026 00:46:09 +0000 (0:00:25.350) 0:00:32.182 ******** 2026-04-04 00:47:01.891958 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.891964 | orchestrator | 2026-04-04 00:47:01.891971 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:47:01.891977 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:01.891985 | orchestrator | 2026-04-04 00:47:01.891991 | orchestrator | 2026-04-04 00:47:01.891998 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:47:01.892004 | orchestrator | Saturday 04 April 2026 00:46:12 +0000 (0:00:03.076) 0:00:35.258 ******** 2026-04-04 00:47:01.892020 | orchestrator | =============================================================================== 2026-04-04 00:47:01.892027 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.35s 2026-04-04 00:47:01.892034 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.08s 2026-04-04 00:47:01.892040 | 
orchestrator | osism.services.homer : Create traefik external network ------------------ 1.96s 2026-04-04 00:47:01.892047 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.82s 2026-04-04 00:47:01.892053 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.71s 2026-04-04 00:47:01.892059 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.94s 2026-04-04 00:47:01.892066 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.13s 2026-04-04 00:47:01.892072 | orchestrator | 2026-04-04 00:47:01.892078 | orchestrator | 2026-04-04 00:47:01.892084 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-04 00:47:01.892090 | orchestrator | 2026-04-04 00:47:01.892097 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-04 00:47:01.892103 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:00.753) 0:00:00.753 ******** 2026-04-04 00:47:01.892110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-04 00:47:01.892118 | orchestrator | 2026-04-04 00:47:01.892124 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-04 00:47:01.892130 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:00.510) 0:00:01.263 ******** 2026-04-04 00:47:01.892137 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-04 00:47:01.892143 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-04 00:47:01.892150 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-04 00:47:01.892156 | orchestrator | 2026-04-04 00:47:01.892163 | orchestrator | 
TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-04 00:47:01.892169 | orchestrator | Saturday 04 April 2026 00:45:41 +0000 (0:00:02.486) 0:00:03.750 ******** 2026-04-04 00:47:01.892175 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.892182 | orchestrator | 2026-04-04 00:47:01.892189 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-04 00:47:01.892196 | orchestrator | Saturday 04 April 2026 00:45:42 +0000 (0:00:01.728) 0:00:05.478 ******** 2026-04-04 00:47:01.892212 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-04 00:47:01.892219 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:01.892226 | orchestrator | 2026-04-04 00:47:01.892232 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-04 00:47:01.892238 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:32.612) 0:00:38.090 ******** 2026-04-04 00:47:01.892245 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.892256 | orchestrator | 2026-04-04 00:47:01.892263 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-04 00:47:01.892269 | orchestrator | Saturday 04 April 2026 00:46:17 +0000 (0:00:02.057) 0:00:40.148 ******** 2026-04-04 00:47:01.892276 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:01.892282 | orchestrator | 2026-04-04 00:47:01.892289 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-04 00:47:01.892295 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:01.333) 0:00:41.481 ******** 2026-04-04 00:47:01.892302 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.892309 | orchestrator | 2026-04-04 00:47:01.892343 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] 
*** 2026-04-04 00:47:01.892350 | orchestrator | Saturday 04 April 2026 00:46:21 +0000 (0:00:02.064) 0:00:43.546 ******** 2026-04-04 00:47:01.892356 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.892371 | orchestrator | 2026-04-04 00:47:01.892378 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-04 00:47:01.892385 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:01.366) 0:00:44.913 ******** 2026-04-04 00:47:01.892392 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.892399 | orchestrator | 2026-04-04 00:47:01.892406 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-04 00:47:01.892413 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:00.541) 0:00:45.455 ******** 2026-04-04 00:47:01.892420 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:01.892426 | orchestrator | 2026-04-04 00:47:01.892433 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:47:01.892439 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:01.892446 | orchestrator | 2026-04-04 00:47:01.892453 | orchestrator | 2026-04-04 00:47:01.892461 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:47:01.892468 | orchestrator | Saturday 04 April 2026 00:46:23 +0000 (0:00:00.350) 0:00:45.805 ******** 2026-04-04 00:47:01.892475 | orchestrator | =============================================================================== 2026-04-04 00:47:01.892482 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.61s 2026-04-04 00:47:01.892488 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.49s 2026-04-04 00:47:01.892494 | orchestrator | osism.services.openstackclient : Restart 
openstackclient service -------- 2.07s 2026-04-04 00:47:01.892500 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.06s 2026-04-04 00:47:01.892507 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.73s 2026-04-04 00:47:01.892521 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.37s 2026-04-04 00:47:01.892528 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.33s 2026-04-04 00:47:01.892533 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.54s 2026-04-04 00:47:01.892540 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.51s 2026-04-04 00:47:01.892546 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s 2026-04-04 00:47:01.892552 | orchestrator | 2026-04-04 00:47:01.892559 | orchestrator | 2026-04-04 00:47:01.892565 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-04 00:47:01.892570 | orchestrator | 2026-04-04 00:47:01.892577 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-04 00:47:01.892583 | orchestrator | Saturday 04 April 2026 00:45:55 +0000 (0:00:00.247) 0:00:00.247 ******** 2026-04-04 00:47:01.892590 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:01.892597 | orchestrator | 2026-04-04 00:47:01.892603 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-04 00:47:01.892609 | orchestrator | Saturday 04 April 2026 00:45:56 +0000 (0:00:01.417) 0:00:01.665 ******** 2026-04-04 00:47:01.892624 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-04 00:47:01.892633 | orchestrator | 2026-04-04 00:47:01.892639 | orchestrator | TASK [osism.services.phpmyadmin : Copy 
docker-compose.yml file] **************** 2026-04-04 00:47:01.892660 | orchestrator | Saturday 04 April 2026 00:45:57 +0000 (0:00:00.547) 0:00:02.212 ******** 2026-04-04 00:47:01.892667 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.892672 | orchestrator | 2026-04-04 00:47:01.892678 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-04 00:47:01.892684 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.901) 0:00:03.113 ******** 2026-04-04 00:47:01.892689 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-04 00:47:01.892696 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:01.892702 | orchestrator | 2026-04-04 00:47:01.892707 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-04 00:47:01.892713 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:57.747) 0:01:00.861 ******** 2026-04-04 00:47:01.892718 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:01.892724 | orchestrator | 2026-04-04 00:47:01.892730 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:47:01.892736 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:01.892742 | orchestrator | 2026-04-04 00:47:01.892748 | orchestrator | 2026-04-04 00:47:01.892758 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:47:01.892772 | orchestrator | Saturday 04 April 2026 00:46:59 +0000 (0:00:03.799) 0:01:04.661 ******** 2026-04-04 00:47:01.892777 | orchestrator | =============================================================================== 2026-04-04 00:47:01.892781 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 57.75s 2026-04-04 00:47:01.892784 | orchestrator | 
osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.80s 2026-04-04 00:47:01.892788 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.42s 2026-04-04 00:47:01.892792 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.90s 2026-04-04 00:47:01.892796 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.55s 2026-04-04 00:47:01.892799 | orchestrator | 2026-04-04 00:47:01 | INFO  | Task 42480841-898f-43a5-b59d-e31aaba06608 is in state SUCCESS 2026-04-04 00:47:01.892803 | orchestrator | 2026-04-04 00:47:01 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:04.928278 | orchestrator | 2026-04-04 00:47:04 | INFO  | Task cef32441-942c-48e6-bec3-963df53ea6ef is in state SUCCESS 2026-04-04 00:47:04.928751 | orchestrator | 2026-04-04 00:47:04.928794 | orchestrator | 2026-04-04 00:47:04.928803 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:47:04.928809 | orchestrator | 2026-04-04 00:47:04.928813 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:47:04.928817 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:00.584) 0:00:00.584 ******** 2026-04-04 00:47:04.928823 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-04 00:47:04.928829 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-04 00:47:04.928843 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-04 00:47:04.928849 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-04 00:47:04.928855 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-04 00:47:04.928861 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-04 00:47:04.928867 | orchestrator | changed: 
[testbed-node-5] => (item=enable_netdata_True) 2026-04-04 00:47:04.928874 | orchestrator | 2026-04-04 00:47:04.928880 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-04 00:47:04.928904 | orchestrator | 2026-04-04 00:47:04.928911 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-04 00:47:04.928918 | orchestrator | Saturday 04 April 2026 00:45:40 +0000 (0:00:01.837) 0:00:02.421 ******** 2026-04-04 00:47:04.928935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:47:04.928946 | orchestrator | 2026-04-04 00:47:04.928952 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-04 00:47:04.928959 | orchestrator | Saturday 04 April 2026 00:45:42 +0000 (0:00:01.782) 0:00:04.203 ******** 2026-04-04 00:47:04.928966 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:47:04.928974 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:47:04.928980 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:47:04.928987 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:47:04.928993 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:47:04.928999 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:47:04.929006 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:04.929012 | orchestrator | 2026-04-04 00:47:04.929018 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-04 00:47:04.929024 | orchestrator | Saturday 04 April 2026 00:45:45 +0000 (0:00:03.407) 0:00:07.610 ******** 2026-04-04 00:47:04.929031 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:04.929037 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:47:04.929044 | orchestrator | ok: 
[testbed-node-4] 2026-04-04 00:47:04.929050 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:47:04.929056 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:47:04.929063 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:47:04.929069 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:47:04.929076 | orchestrator | 2026-04-04 00:47:04.929083 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-04 00:47:04.929090 | orchestrator | Saturday 04 April 2026 00:45:49 +0000 (0:00:03.558) 0:00:11.169 ******** 2026-04-04 00:47:04.929097 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:04.929105 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:47:04.929111 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:47:04.929118 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:47:04.929125 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:47:04.929132 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:47:04.929139 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:47:04.929146 | orchestrator | 2026-04-04 00:47:04.929153 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-04 00:47:04.929160 | orchestrator | Saturday 04 April 2026 00:45:51 +0000 (0:00:02.338) 0:00:13.508 ******** 2026-04-04 00:47:04.929166 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:04.929173 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:47:04.929179 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:47:04.929185 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:47:04.929191 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:47:04.929197 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:47:04.929204 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:47:04.929210 | orchestrator | 2026-04-04 00:47:04.929217 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 
2026-04-04 00:47:04.929223 | orchestrator | Saturday 04 April 2026 00:46:02 +0000 (0:00:10.300) 0:00:23.808 ******** 2026-04-04 00:47:04.929229 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:47:04.929235 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:47:04.929242 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:47:04.929254 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:47:04.929261 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:47:04.929267 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:04.929274 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:47:04.929281 | orchestrator | 2026-04-04 00:47:04.929288 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-04 00:47:04.929300 | orchestrator | Saturday 04 April 2026 00:46:37 +0000 (0:00:35.901) 0:00:59.710 ******** 2026-04-04 00:47:04.929307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:47:04.929315 | orchestrator | 2026-04-04 00:47:04.929321 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-04 00:47:04.929325 | orchestrator | Saturday 04 April 2026 00:46:39 +0000 (0:00:01.268) 0:01:00.978 ******** 2026-04-04 00:47:04.929328 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-04 00:47:04.929332 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-04 00:47:04.929336 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-04 00:47:04.929340 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-04 00:47:04.929354 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-04 00:47:04.929359 | orchestrator | changed: [testbed-node-2] => 
(item=netdata.conf) 2026-04-04 00:47:04.929363 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-04 00:47:04.929367 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-04 00:47:04.929372 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-04 00:47:04.929377 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-04 00:47:04.929381 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-04 00:47:04.929386 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-04 00:47:04.929390 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-04 00:47:04.929395 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-04 00:47:04.929399 | orchestrator | 2026-04-04 00:47:04.929403 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-04 00:47:04.929408 | orchestrator | Saturday 04 April 2026 00:46:43 +0000 (0:00:03.916) 0:01:04.895 ******** 2026-04-04 00:47:04.929413 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:04.929418 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:47:04.929422 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:47:04.929426 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:47:04.929431 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:47:04.929435 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:47:04.929439 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:47:04.929444 | orchestrator | 2026-04-04 00:47:04.929448 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-04 00:47:04.929453 | orchestrator | Saturday 04 April 2026 00:46:44 +0000 (0:00:01.564) 0:01:06.459 ******** 2026-04-04 00:47:04.929457 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:04.929461 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:47:04.929466 | orchestrator | changed: 
[testbed-node-3] 2026-04-04 00:47:04.929470 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:47:04.929475 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:47:04.929479 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:47:04.929484 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:47:04.929488 | orchestrator | 2026-04-04 00:47:04.929505 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-04 00:47:04.929511 | orchestrator | Saturday 04 April 2026 00:46:45 +0000 (0:00:01.294) 0:01:07.754 ******** 2026-04-04 00:47:04.929517 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:04.929523 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:47:04.929529 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:47:04.929536 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:47:04.929542 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:47:04.929548 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:47:04.929554 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:47:04.929561 | orchestrator | 2026-04-04 00:47:04.929568 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-04 00:47:04.929577 | orchestrator | Saturday 04 April 2026 00:46:47 +0000 (0:00:01.710) 0:01:09.464 ******** 2026-04-04 00:47:04.929582 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:47:04.929586 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:47:04.929592 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:47:04.929598 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:47:04.929604 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:47:04.929611 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:47:04.929617 | orchestrator | ok: [testbed-manager] 2026-04-04 00:47:04.929624 | orchestrator | 2026-04-04 00:47:04.929630 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-04 00:47:04.929637 | orchestrator | 
Saturday 04 April 2026 00:46:49 +0000 (0:00:01.670) 0:01:11.135 ******** 2026-04-04 00:47:04.929655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-04 00:47:04.929664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:47:04.929669 | orchestrator | 2026-04-04 00:47:04.929674 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-04 00:47:04.929678 | orchestrator | Saturday 04 April 2026 00:46:51 +0000 (0:00:01.643) 0:01:12.778 ******** 2026-04-04 00:47:04.929682 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:04.929687 | orchestrator | 2026-04-04 00:47:04.929691 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-04 00:47:04.929696 | orchestrator | Saturday 04 April 2026 00:46:53 +0000 (0:00:02.175) 0:01:14.954 ******** 2026-04-04 00:47:04.929700 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:47:04.929705 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:47:04.929718 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:47:04.929723 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:47:04.929728 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:47:04.929732 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:47:04.929737 | orchestrator | changed: [testbed-manager] 2026-04-04 00:47:04.929741 | orchestrator | 2026-04-04 00:47:04.929746 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:47:04.929751 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:04.929756 | orchestrator | testbed-node-0 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:04.929761 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:04.929765 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:04.929773 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:04.929777 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:04.929781 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:47:04.929785 | orchestrator | 2026-04-04 00:47:04.929789 | orchestrator | 2026-04-04 00:47:04.929792 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:47:04.929796 | orchestrator | Saturday 04 April 2026 00:47:04 +0000 (0:00:11.002) 0:01:25.956 ******** 2026-04-04 00:47:04.929800 | orchestrator | =============================================================================== 2026-04-04 00:47:04.929804 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 35.90s 2026-04-04 00:47:04.929811 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.00s 2026-04-04 00:47:04.929815 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.30s 2026-04-04 00:47:04.929819 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.92s 2026-04-04 00:47:04.929823 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.56s 2026-04-04 00:47:04.929827 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.41s 2026-04-04 00:47:04.929830 | orchestrator | osism.services.netdata : Add 
repository gpg key ------------------------- 2.34s 2026-04-04 00:47:04.929834 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.18s 2026-04-04 00:47:04.929838 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.84s 2026-04-04 00:47:04.929842 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.78s 2026-04-04 00:47:04.929846 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.71s 2026-04-04 00:47:04.929849 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.67s 2026-04-04 00:47:04.929853 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.64s 2026-04-04 00:47:04.929857 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.56s 2026-04-04 00:47:04.929861 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.29s 2026-04-04 00:47:04.929865 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.27s 2026-04-04 00:47:04.931622 | orchestrator | 2026-04-04 00:47:04 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:47:04.933367 | orchestrator | 2026-04-04 00:47:04 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:47:04.935012 | orchestrator | 2026-04-04 00:47:04 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:47:04.935398 | orchestrator | 2026-04-04 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:07.966259 | orchestrator | 2026-04-04 00:47:07 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:47:07.967971 | orchestrator | 2026-04-04 00:47:07 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 
00:47:07.969373 | orchestrator | 2026-04-04 00:47:07 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state STARTED 2026-04-04 00:47:07.969419 | orchestrator | 2026-04-04 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:18.018285 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED 2026-04-04 00:48:18.020759 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:48:18.023739 | orchestrator | 
2026-04-04 00:48:18 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state STARTED 2026-04-04 00:48:18.024132 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:48:18.024646 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:48:18.027585 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 4401209e-65a8-4ada-8e7c-b8ec35a94253 is in state SUCCESS 2026-04-04 00:48:18.029168 | orchestrator | 2026-04-04 00:48:18.029267 | orchestrator | 2026-04-04 00:48:18.029276 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-04 00:48:18.029282 | orchestrator | 2026-04-04 00:48:18.029297 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-04 00:48:18.029303 | orchestrator | Saturday 04 April 2026 00:45:32 +0000 (0:00:00.240) 0:00:00.240 ******** 2026-04-04 00:48:18.029318 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:48:18.029324 | orchestrator | 2026-04-04 00:48:18.029329 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-04 00:48:18.029333 | orchestrator | Saturday 04 April 2026 00:45:33 +0000 (0:00:01.114) 0:00:01.355 ******** 2026-04-04 00:48:18.029338 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:48:18.029343 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:48:18.029347 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:48:18.029352 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:48:18.029356 | 
orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:48:18.029361 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:48:18.029365 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:48:18.029369 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:48:18.029373 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:48:18.029378 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:48:18.029384 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:48:18.029388 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:48:18.029392 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:48:18.029396 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:48:18.029403 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:48:18.029407 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:48:18.029412 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:48:18.029416 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:48:18.029420 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:48:18.029425 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:48:18.029429 | orchestrator 
| changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:48:18.029433 | orchestrator | 2026-04-04 00:48:18.029438 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-04 00:48:18.029442 | orchestrator | Saturday 04 April 2026 00:45:37 +0000 (0:00:03.861) 0:00:05.217 ******** 2026-04-04 00:48:18.029447 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:48:18.029452 | orchestrator | 2026-04-04 00:48:18.029456 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-04 00:48:18.029461 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:01.249) 0:00:06.466 ******** 2026-04-04 00:48:18.029467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.029480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 
00:48:18.029496 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.029501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.029506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.029513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.029518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.029522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.029530 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.029542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.029550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.029561 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.029583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029676 | orchestrator |
2026-04-04 00:48:18.029683 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-04 00:48:18.029690 | orchestrator | Saturday 04 April 2026 00:45:43 +0000 (0:00:04.621) 0:00:11.088 ********
2026-04-04 00:48:18.029702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029721 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029750 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029762 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:18.029767 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:18.029773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029797 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:18.029810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029816 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:48:18.029821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029850 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:18.029855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029866 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:18.029874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029886 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:18.029891 | orchestrator |
2026-04-04 00:48:18.029896 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-04 00:48:18.029901 | orchestrator | Saturday 04 April 2026 00:45:46 +0000 (0:00:03.588) 0:00:14.676 ********
2026-04-04 00:48:18.029907 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029912 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029923 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029928 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:48:18.029933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029960 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:18.029965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.029987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.029992 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:18.029998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030009 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:18.030060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.030069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.030661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030817 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:18.030839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.030848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030855 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:18.030861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.030874 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:18.030881 | orchestrator |
2026-04-04 00:48:18.030889 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-04 00:48:18.030897 | orchestrator | Saturday 04 April 2026 00:45:51 +0000 (0:00:04.152) 0:00:18.829 ********
2026-04-04 00:48:18.030903 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:48:18.030909 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:18.030916 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:18.030922 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:18.030928 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:18.030966 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:18.030973 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:18.030979 | orchestrator |
2026-04-04 00:48:18.030986 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-04 00:48:18.030992 | orchestrator | Saturday 04 April 2026 00:45:52 +0000 (0:00:01.143) 0:00:19.972 ********
2026-04-04 00:48:18.031006 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:48:18.031013 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:18.031021 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:18.031029 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:18.031036 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:18.031043 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:18.031050 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:18.031058 | orchestrator |
2026-04-04 00:48:18.031065 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-04 00:48:18.031073 | orchestrator | Saturday 04 April 2026 00:45:54 +0000 (0:00:01.832) 0:00:21.804 ********
2026-04-04 00:48:18.031080 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:48:18.031087 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:18.031094 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:18.031103 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:18.031111 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:18.031120 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:18.031129 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:18.031137 | orchestrator |
2026-04-04 00:48:18.031146 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-04-04 00:48:18.031155 | orchestrator | Saturday 04 April 2026 00:45:55 +0000 (0:00:01.550) 0:00:23.355 ********
2026-04-04 00:48:18.031164 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:18.031172 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:18.031181 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:18.031190 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:18.031198 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:18.031207 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:18.031216 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:18.031225 | orchestrator |
2026-04-04 00:48:18.031233 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-04 00:48:18.031242 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:02.590) 0:00:25.945 ********
2026-04-04 00:48:18.031256 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.031267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.031276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.031285 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.031315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.031324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.031333 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-04 00:48:18.031357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.031428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.031438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.031447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.031455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.031468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.031488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.031496 | orchestrator | 2026-04-04 00:48:18.031504 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-04 00:48:18.031516 | orchestrator | Saturday 04 April 2026 00:46:03 +0000 (0:00:05.169) 0:00:31.115 ******** 2026-04-04 00:48:18.031529 | orchestrator | [WARNING]: Skipped 2026-04-04 00:48:18.031543 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-04 00:48:18.031557 | orchestrator | to this access issue: 2026-04-04 00:48:18.031570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-04 00:48:18.031581 | orchestrator | directory 2026-04-04 00:48:18.031589 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:48:18.031619 | orchestrator | 2026-04-04 00:48:18.031627 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-04 00:48:18.031634 | orchestrator | Saturday 04 April 2026 00:46:04 +0000 (0:00:00.961) 0:00:32.077 ******** 2026-04-04 00:48:18.031641 | orchestrator | [WARNING]: Skipped 2026-04-04 00:48:18.031648 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-04 00:48:18.031656 | orchestrator | to this access issue: 2026-04-04 00:48:18.031663 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-04 00:48:18.031670 | orchestrator | directory 2026-04-04 00:48:18.031677 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:48:18.031685 | orchestrator | 2026-04-04 
00:48:18.031692 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-04 00:48:18.031699 | orchestrator | Saturday 04 April 2026 00:46:05 +0000 (0:00:00.941) 0:00:33.019 ******** 2026-04-04 00:48:18.031706 | orchestrator | [WARNING]: Skipped 2026-04-04 00:48:18.031713 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-04 00:48:18.031721 | orchestrator | to this access issue: 2026-04-04 00:48:18.031728 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-04 00:48:18.031735 | orchestrator | directory 2026-04-04 00:48:18.031743 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:48:18.031755 | orchestrator | 2026-04-04 00:48:18.031771 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-04 00:48:18.031785 | orchestrator | Saturday 04 April 2026 00:46:06 +0000 (0:00:01.061) 0:00:34.081 ******** 2026-04-04 00:48:18.031797 | orchestrator | [WARNING]: Skipped 2026-04-04 00:48:18.031808 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-04 00:48:18.031820 | orchestrator | to this access issue: 2026-04-04 00:48:18.031838 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-04 00:48:18.031859 | orchestrator | directory 2026-04-04 00:48:18.031872 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:48:18.031884 | orchestrator | 2026-04-04 00:48:18.031896 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-04 00:48:18.031908 | orchestrator | Saturday 04 April 2026 00:46:07 +0000 (0:00:00.952) 0:00:35.033 ******** 2026-04-04 00:48:18.031920 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:18.031932 | orchestrator | changed: [testbed-manager] 2026-04-04 00:48:18.031944 | 
orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:18.031954 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:18.031962 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:48:18.031969 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:48:18.031976 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:48:18.031984 | orchestrator | 2026-04-04 00:48:18.031991 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-04 00:48:18.031998 | orchestrator | Saturday 04 April 2026 00:46:11 +0000 (0:00:04.412) 0:00:39.445 ******** 2026-04-04 00:48:18.032006 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:48:18.032015 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:48:18.032022 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:48:18.032029 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:48:18.032037 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:48:18.032044 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:48:18.032051 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:48:18.032058 | orchestrator | 2026-04-04 00:48:18.032066 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-04 00:48:18.032073 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:04.025) 0:00:43.471 ******** 2026-04-04 00:48:18.032081 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:18.032088 | orchestrator 
| changed: [testbed-node-1] 2026-04-04 00:48:18.032096 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:48:18.032103 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:48:18.032110 | orchestrator | changed: [testbed-manager] 2026-04-04 00:48:18.032118 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:18.032125 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:48:18.032132 | orchestrator | 2026-04-04 00:48:18.032139 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-04 00:48:18.032147 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:02.498) 0:00:45.969 ******** 2026-04-04 00:48:18.032165 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.032188 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032201 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.032209 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.032224 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032233 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032245 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.032266 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032284 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-04 00:48:18.032305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032322 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.032359 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032379 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.032410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032423 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032435 | orchestrator | 2026-04-04 00:48:18.032447 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-04 00:48:18.032459 | orchestrator | Saturday 04 April 2026 00:46:20 +0000 (0:00:02.650) 0:00:48.619 ******** 2026-04-04 00:48:18.032471 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:18.032484 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:18.032497 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:18.032507 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:18.032520 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:18.032540 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:18.032553 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:18.032564 | orchestrator | 2026-04-04 00:48:18.032575 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-04 00:48:18.032587 | orchestrator | Saturday 04 April 2026 00:46:23 +0000 (0:00:02.496) 0:00:51.116 ******** 2026-04-04 00:48:18.032620 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-04 00:48:18.032632 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-04 
00:48:18.032643 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-04 00:48:18.032653 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-04 00:48:18.032663 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-04 00:48:18.032686 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-04 00:48:18.032699 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-04 00:48:18.032712 | orchestrator | 2026-04-04 00:48:18.032732 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-04 00:48:18.032745 | orchestrator | Saturday 04 April 2026 00:46:25 +0000 (0:00:02.316) 0:00:53.433 ******** 2026-04-04 00:48:18.032755 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-04-04 00:48:18.032771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032800 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032863 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032871 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:48:18.032899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:48:18.032956 | orchestrator | 2026-04-04 00:48:18.032964 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-04 00:48:18.032972 | orchestrator | Saturday 04 April 2026 00:46:29 +0000 (0:00:04.111) 0:00:57.544 ******** 2026-04-04 00:48:18.032984 | orchestrator | changed: [testbed-manager] => { 2026-04-04 00:48:18.032991 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:48:18.032999 | orchestrator | } 2026-04-04 00:48:18.033006 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 00:48:18.033014 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:48:18.033021 | orchestrator | } 2026-04-04 00:48:18.033028 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 00:48:18.033035 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:48:18.033042 | orchestrator | } 2026-04-04 00:48:18.033049 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 00:48:18.033056 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:48:18.033063 | orchestrator | } 2026-04-04 00:48:18.033073 | orchestrator | changed: [testbed-node-3] => { 2026-04-04 00:48:18.033084 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:48:18.033102 | orchestrator | } 2026-04-04 00:48:18.033114 | orchestrator | changed: [testbed-node-4] => { 2026-04-04 00:48:18.033125 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 
00:48:18.033136 | orchestrator | } 2026-04-04 00:48:18.033147 | orchestrator | changed: [testbed-node-5] => { 2026-04-04 00:48:18.033158 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:48:18.033171 | orchestrator | } 2026-04-04 00:48:18.033184 | orchestrator | 2026-04-04 00:48:18.033195 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 00:48:18.033207 | orchestrator | Saturday 04 April 2026 00:46:30 +0000 (0:00:00.773) 0:00:58.318 ******** 2026-04-04 00:48:18.033227 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:48:18.033236 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:48:18.033264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:48:18.033295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033316 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:48:18.033323 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:18.033331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:48:18.033339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033364 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:18.033372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:48:18.033380 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:48:18.033408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033416 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:18.033423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:48:18.033431 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:48:18.033438 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:48:18.033445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:48:18.033466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.033475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:48:18.033482 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:18.033489 | orchestrator |
2026-04-04 00:48:18.033497 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-04 00:48:18.033504 | orchestrator | Saturday 04 April 2026 00:46:32 +0000 (0:00:01.623) 0:00:59.941 ********
2026-04-04 00:48:18.033515 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:18.033532 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:18.033547 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:18.033559 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:18.033570 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:18.033582 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:18.033629 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:18.033642 | orchestrator |
2026-04-04 00:48:18.033654 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-04 00:48:18.033662 | orchestrator | Saturday 04 April 2026 00:46:33 +0000 (0:00:01.619) 0:01:01.561 ********
2026-04-04 00:48:18.033669 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:18.033676 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:18.033684 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:18.033691 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:18.033698 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:18.033705 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:18.033713 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:18.033720 | orchestrator |
2026-04-04 00:48:18.033730 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-04 00:48:18.033741 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.116) 0:01:03.150 ********
2026-04-04 00:48:18.033756 | orchestrator |
2026-04-04 00:48:18.033774 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-04 00:48:18.033785 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.116) 0:01:03.267 ********
2026-04-04 00:48:18.033796 | orchestrator |
2026-04-04 00:48:18.033808 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-04 00:48:18.033819 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.087) 0:01:03.356 ********
2026-04-04 00:48:18.033829 | orchestrator |
2026-04-04 00:48:18.033849 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-04 00:48:18.033860 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.083) 0:01:03.440 ********
2026-04-04 00:48:18.033872 | orchestrator |
2026-04-04 00:48:18.033882 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-04 00:48:18.033893 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.099) 0:01:03.539 ********
2026-04-04 00:48:18.033903 | orchestrator |
2026-04-04 00:48:18.033914 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-04 00:48:18.033924 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.064) 0:01:03.604 ********
2026-04-04 00:48:18.033946 | orchestrator |
2026-04-04 00:48:18.033958 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-04 00:48:18.033970 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.116) 0:01:03.721 ********
2026-04-04 00:48:18.033981 | orchestrator |
2026-04-04 00:48:18.033993 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-04 00:48:18.034005 | orchestrator | Saturday 04 April 2026 00:46:36 +0000 (0:00:00.129) 0:01:03.850 ********
2026-04-04 00:48:18.034078 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:18.034099 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:18.034112 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:18.034124 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:18.034136 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:18.034146 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:18.034157 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:18.034170 | orchestrator |
2026-04-04 00:48:18.034181 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-04 00:48:18.034195 | orchestrator | Saturday 04 April 2026 00:47:11 +0000 (0:00:35.672) 0:01:39.523 ********
2026-04-04 00:48:18.034207 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:18.034219 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:18.034231 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:18.034239 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:18.034247 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:18.034254 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:18.034261 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:18.034268 | orchestrator |
2026-04-04 00:48:18.034276 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-04 00:48:18.034283 | orchestrator | Saturday 04 April
2026 00:48:04 +0000 (0:00:52.841) 0:02:32.365 ********
2026-04-04 00:48:18.034290 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:18.034298 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:18.034305 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:18.034313 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:18.034320 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:48:18.034327 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:48:18.034334 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:48:18.034342 | orchestrator |
2026-04-04 00:48:18.034355 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-04 00:48:18.034363 | orchestrator | Saturday 04 April 2026 00:48:06 +0000 (0:00:01.612) 0:02:33.978 ********
2026-04-04 00:48:18.034370 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:18.034378 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:18.034385 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:18.034392 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:18.034399 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:18.034406 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:18.034413 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:18.034421 | orchestrator |
2026-04-04 00:48:18.034428 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:48:18.034438 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:48:18.034448 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:48:18.034456 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:48:18.034463 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:48:18.034471 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:48:18.034487 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:48:18.034495 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:48:18.034502 | orchestrator |
2026-04-04 00:48:18.034511 | orchestrator |
2026-04-04 00:48:18.034524 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:48:18.034544 | orchestrator | Saturday 04 April 2026 00:48:15 +0000 (0:00:09.012) 0:02:42.990 ********
2026-04-04 00:48:18.034556 | orchestrator | ===============================================================================
2026-04-04 00:48:18.034568 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 52.84s
2026-04-04 00:48:18.034580 | orchestrator | common : Restart fluentd container ------------------------------------- 35.67s
2026-04-04 00:48:18.034609 | orchestrator | common : Restart cron container ----------------------------------------- 9.01s
2026-04-04 00:48:18.034622 | orchestrator | common : Copying over config.json files for services -------------------- 5.17s
2026-04-04 00:48:18.034643 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.62s
2026-04-04 00:48:18.034656 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.41s
2026-04-04 00:48:18.034664 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.15s
2026-04-04 00:48:18.034671 | orchestrator | service-check-containers : common | Check containers -------------------- 4.11s
2026-04-04 00:48:18.034678 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.03s
2026-04-04 00:48:18.034686 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.86s
2026-04-04 00:48:18.034693 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.59s
2026-04-04 00:48:18.034701 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.65s
2026-04-04 00:48:18.034708 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.59s
2026-04-04 00:48:18.034716 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.50s
2026-04-04 00:48:18.034723 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.50s
2026-04-04 00:48:18.034730 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.32s
2026-04-04 00:48:18.034737 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.83s
2026-04-04 00:48:18.034745 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.62s
2026-04-04 00:48:18.034752 | orchestrator | common : Creating log volume -------------------------------------------- 1.62s
2026-04-04 00:48:18.034759 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.61s
2026-04-04 00:48:18.034766 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:18.034774 | orchestrator | 2026-04-04 00:48:18 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:21.112487 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED
2026-04-04 00:48:21.112575 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:21.112582 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state STARTED
2026-04-04 00:48:21.112676 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:48:21.112733 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:48:21.112753 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED 2026-04-04 00:48:21.112757 | orchestrator | 2026-04-04 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:24.120062 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED 2026-04-04 00:48:24.120204 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:48:24.120999 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state STARTED 2026-04-04 00:48:24.121737 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:48:24.122419 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:48:24.122958 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED 2026-04-04 00:48:24.122988 | orchestrator | 2026-04-04 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:27.182069 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED 2026-04-04 00:48:27.182164 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:48:27.182871 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state STARTED 2026-04-04 00:48:27.183780 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 
2026-04-04 00:48:27.186500 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:48:27.187078 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED 2026-04-04 00:48:27.187152 | orchestrator | 2026-04-04 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:30.224124 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED 2026-04-04 00:48:30.224231 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:48:30.225165 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state STARTED 2026-04-04 00:48:30.225861 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:48:30.227365 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:48:30.228170 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED 2026-04-04 00:48:30.228198 | orchestrator | 2026-04-04 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:33.258055 | orchestrator | 2026-04-04 00:48:33 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED 2026-04-04 00:48:33.258214 | orchestrator | 2026-04-04 00:48:33 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:48:33.259136 | orchestrator | 2026-04-04 00:48:33 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state STARTED 2026-04-04 00:48:33.259701 | orchestrator | 2026-04-04 00:48:33 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:48:33.260385 | orchestrator | 2026-04-04 00:48:33 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 
2026-04-04 00:48:33.261322 | orchestrator | 2026-04-04 00:48:33 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:33.261365 | orchestrator | 2026-04-04 00:48:33 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:36.316067 | orchestrator | 2026-04-04 00:48:36 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED
2026-04-04 00:48:36.316135 | orchestrator | 2026-04-04 00:48:36 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:36.316441 | orchestrator | 2026-04-04 00:48:36 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state STARTED
2026-04-04 00:48:36.318520 | orchestrator | 2026-04-04 00:48:36 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:36.318934 | orchestrator | 2026-04-04 00:48:36 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:36.319521 | orchestrator | 2026-04-04 00:48:36 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:36.319555 | orchestrator | 2026-04-04 00:48:36 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:39.371958 | orchestrator | 2026-04-04 00:48:39 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED
2026-04-04 00:48:39.373551 | orchestrator | 2026-04-04 00:48:39 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:39.373877 | orchestrator | 2026-04-04 00:48:39 | INFO  | Task a019a29a-f85d-498c-aa13-37e208b5b1d8 is in state SUCCESS
2026-04-04 00:48:39.374040 | orchestrator |
2026-04-04 00:48:39.374056 | orchestrator |
2026-04-04 00:48:39.374063 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:48:39.374069 | orchestrator |
2026-04-04 00:48:39.374075 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:48:39.374080 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:00.594) 0:00:00.594 ********
2026-04-04 00:48:39.374086 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:39.374092 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:39.374097 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:39.374103 | orchestrator |
2026-04-04 00:48:39.374108 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:48:39.374114 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.583) 0:00:01.177 ********
2026-04-04 00:48:39.374119 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-04 00:48:39.374125 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-04 00:48:39.374130 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-04 00:48:39.374135 | orchestrator |
2026-04-04 00:48:39.374140 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-04 00:48:39.374146 | orchestrator |
2026-04-04 00:48:39.374150 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-04 00:48:39.374154 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:00.506) 0:00:01.684 ********
2026-04-04 00:48:39.374157 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:48:39.374161 | orchestrator |
2026-04-04 00:48:39.374164 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-04 00:48:39.374168 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:00.732) 0:00:02.417 ********
2026-04-04 00:48:39.374174 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-04 00:48:39.374178 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-04 00:48:39.374181 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-04 00:48:39.374184 | orchestrator |
2026-04-04 00:48:39.374188 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-04 00:48:39.374191 | orchestrator | Saturday 04 April 2026 00:48:25 +0000 (0:00:01.661) 0:00:04.078 ********
2026-04-04 00:48:39.374205 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-04 00:48:39.374208 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-04 00:48:39.374211 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-04 00:48:39.374215 | orchestrator |
2026-04-04 00:48:39.374218 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-04-04 00:48:39.374221 | orchestrator | Saturday 04 April 2026 00:48:27 +0000 (0:00:01.726) 0:00:05.805 ********
2026-04-04 00:48:39.374227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-04 00:48:39.374235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-04 00:48:39.374244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-04 00:48:39.374248 | orchestrator |
2026-04-04 00:48:39.374251 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-04-04 00:48:39.374254 | orchestrator | Saturday 04 April 2026 00:48:28 +0000 (0:00:01.135) 0:00:06.940 ********
2026-04-04 00:48:39.374257 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 00:48:39.374261 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:48:39.374264 | orchestrator | }
2026-04-04 00:48:39.374267 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 00:48:39.374270 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:48:39.374273 | orchestrator | }
2026-04-04 00:48:39.374276 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 00:48:39.374279 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:48:39.374282 | orchestrator | }
2026-04-04 00:48:39.374286 | orchestrator |
2026-04-04 00:48:39.374289 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-04 00:48:39.374292 | orchestrator | Saturday 04 April 2026 00:48:28 +0000 (0:00:00.302) 0:00:07.243 ********
2026-04-04 00:48:39.374295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-04 00:48:39.374301 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:39.374304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-04 00:48:39.374308 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:39.374311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-04 00:48:39.374314 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:39.374317 | orchestrator |
2026-04-04 00:48:39.374321 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-04 00:48:39.374324 | orchestrator | Saturday 04 April 2026 00:48:29 +0000 (0:00:01.140) 0:00:08.384 ********
2026-04-04 00:48:39.374327 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:39.374330 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:39.374333 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:39.374336 | orchestrator |
2026-04-04 00:48:39.374341 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:48:39.374344 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-04 00:48:39.374348 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-04 00:48:39.374352 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-04 00:48:39.374355 | orchestrator |
2026-04-04 00:48:39.374358 | orchestrator |
2026-04-04 00:48:39.374361 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:48:39.374364 | orchestrator | Saturday 04 April 2026 00:48:37 +0000 (0:00:07.344) 0:00:15.728 ********
2026-04-04 00:48:39.374369 | orchestrator | ===============================================================================
2026-04-04 00:48:39.374372 | orchestrator | memcached : Restart memcached container --------------------------------- 7.34s
2026-04-04 00:48:39.374375 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.73s
2026-04-04 00:48:39.374381 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.66s
2026-04-04 00:48:39.374384 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.14s
2026-04-04 00:48:39.374387 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.14s
2026-04-04 00:48:39.374390 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.73s
2026-04-04 00:48:39.374393 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.58s
2026-04-04 00:48:39.374397 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s
2026-04-04 00:48:39.374400 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.30s
2026-04-04 00:48:39.375601 | orchestrator | 2026-04-04 00:48:39 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:39.376091 | orchestrator | 2026-04-04 00:48:39 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:39.377261 | orchestrator | 2026-04-04 00:48:39 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:48:39.380179 | orchestrator | 2026-04-04 00:48:39 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:39.380230 | orchestrator | 2026-04-04 00:48:39 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:42.451835 | orchestrator | 2026-04-04 00:48:42 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED
2026-04-04 00:48:42.452246 | orchestrator | 2026-04-04 00:48:42 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:42.452883 | orchestrator | 2026-04-04 00:48:42 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:42.453474 | orchestrator | 2026-04-04 00:48:42 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:42.454048 | orchestrator | 2026-04-04 00:48:42 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:48:42.454762 | orchestrator | 2026-04-04 00:48:42 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:42.454805 | orchestrator | 2026-04-04 00:48:42 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:45.543322 | orchestrator | 2026-04-04 00:48:45 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED
2026-04-04 00:48:45.543858 | orchestrator | 2026-04-04 00:48:45 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:45.546193 | orchestrator | 2026-04-04 00:48:45 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:45.551460 | orchestrator | 2026-04-04 00:48:45 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:45.551515 | orchestrator | 2026-04-04 00:48:45 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:48:45.551524 | orchestrator | 2026-04-04 00:48:45 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:45.551532 | orchestrator | 2026-04-04 00:48:45 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:48.575898 | orchestrator | 2026-04-04 00:48:48 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED
2026-04-04 00:48:48.579296 | orchestrator | 2026-04-04 00:48:48 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:48.582767 | orchestrator | 2026-04-04 00:48:48 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:48.583642 | orchestrator | 2026-04-04 00:48:48 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:48.584918 | orchestrator | 2026-04-04 00:48:48 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:48:48.585770 | orchestrator | 2026-04-04 00:48:48 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:48.585797 | orchestrator | 2026-04-04 00:48:48 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:51.607549 | orchestrator | 2026-04-04 00:48:51 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state STARTED
2026-04-04 00:48:51.607818 | orchestrator | 2026-04-04 00:48:51 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:51.608477 | orchestrator | 2026-04-04 00:48:51 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:51.609068 | orchestrator | 2026-04-04 00:48:51 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:51.609729 | orchestrator | 2026-04-04 00:48:51 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:48:51.610279 | orchestrator | 2026-04-04 00:48:51 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:51.610308 | orchestrator | 2026-04-04 00:48:51 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:54.657359 | orchestrator | 2026-04-04 00:48:54 | INFO  | Task f371d87f-7202-4b77-8f31-df757f9c20f5 is in state SUCCESS
2026-04-04 00:48:54.658091 | orchestrator |
2026-04-04 00:48:54.658155 | orchestrator |
2026-04-04 00:48:54.658163 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:48:54.658170 | orchestrator |
2026-04-04 00:48:54.658175 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:48:54.658181 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:00.368) 0:00:00.368 ********
2026-04-04 00:48:54.658187 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:54.658193 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:54.658198 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:54.658204 | orchestrator |
2026-04-04 00:48:54.658210 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:48:54.658215 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.639) 0:00:01.007 ********
2026-04-04 00:48:54.658249 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-04 00:48:54.658255 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-04 00:48:54.658260 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-04 00:48:54.658265 | orchestrator |
2026-04-04 00:48:54.658270 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-04 00:48:54.658275 | orchestrator |
2026-04-04 00:48:54.658281 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-04 00:48:54.658286 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.580) 0:00:01.588 ********
2026-04-04 00:48:54.658294 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:48:54.658301 | orchestrator |
2026-04-04 00:48:54.658307 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-04 00:48:54.658312 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:01.229) 0:00:02.818 ********
2026-04-04 00:48:54.658319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658402 | orchestrator |
2026-04-04 00:48:54.658407 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-04 00:48:54.658413 | orchestrator | Saturday 04 April 2026 00:48:26 +0000 (0:00:02.368) 0:00:05.186 ********
2026-04-04 00:48:54.658419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658468 | orchestrator |
2026-04-04 00:48:54.658473 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-04 00:48:54.658479 | orchestrator | Saturday 04 April 2026 00:48:29 +0000 (0:00:02.655) 0:00:07.842 ********
2026-04-04 00:48:54.658484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658525 | orchestrator |
2026-04-04 00:48:54.658531 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-04-04 00:48:54.658536 | orchestrator | Saturday 04 April 2026 00:48:31 +0000 (0:00:02.848) 0:00:10.690 ********
2026-04-04 00:48:54.658541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658665 | orchestrator |
2026-04-04 00:48:54.658671 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-04-04 00:48:54.658677 | orchestrator | Saturday 04 April 2026 00:48:33 +0000 (0:00:02.084) 0:00:12.774 ********
2026-04-04 00:48:54.658682 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 00:48:54.658688 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:48:54.658694 | orchestrator | }
2026-04-04 00:48:54.658700 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 00:48:54.658705 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:48:54.658715 | orchestrator | }
2026-04-04 00:48:54.658720 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 00:48:54.658726 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:48:54.658731 | orchestrator | }
2026-04-04 00:48:54.658737 | orchestrator |
2026-04-04 00:48:54.658742 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-04 00:48:54.658748 |
orchestrator | Saturday 04 April 2026 00:48:34 +0000 (0:00:00.588) 0:00:13.363 ******** 2026-04-04 00:48:54.658753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-04 00:48:54.658759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-04 00:48:54.658766 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:54.658771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-04 00:48:54.658784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-04 00:48:54.658790 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:54.658796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-04 00:48:54.658805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:48:54.658814 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:54.658819 | orchestrator |
2026-04-04 00:48:54.658825 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-04 00:48:54.658830 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:00.630) 0:00:13.993 ********
2026-04-04 00:48:54.658836 | orchestrator |
2026-04-04 00:48:54.658841 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-04 00:48:54.658847 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:00.106) 0:00:14.099 ********
2026-04-04 00:48:54.658852 | orchestrator |
2026-04-04 00:48:54.658858 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-04 00:48:54.658863 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:00.141) 0:00:14.241 ********
2026-04-04 00:48:54.658869 | orchestrator |
2026-04-04 00:48:54.658874 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-04 00:48:54.658880 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:00.236) 0:00:14.477 ********
2026-04-04 00:48:54.658886 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:54.658891 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:54.658897 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:54.658902 | orchestrator |
2026-04-04 00:48:54.658908 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-04 00:48:54.658913 | orchestrator | Saturday 04 April 2026 00:48:43 +0000 (0:00:07.798) 0:00:22.276 ********
2026-04-04 00:48:54.658918 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:54.658924 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:54.658930 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:54.658935 | orchestrator |
2026-04-04 00:48:54.658941 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:48:54.658947 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-04 00:48:54.658954 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-04 00:48:54.658960 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-04 00:48:54.658965 | orchestrator |
2026-04-04 00:48:54.658971 | orchestrator |
2026-04-04 00:48:54.658976 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:48:54.658981 | orchestrator | Saturday 04 April 2026 00:48:54 +0000 (0:00:10.580) 0:00:32.857 ********
2026-04-04 00:48:54.658987 | orchestrator | ===============================================================================
2026-04-04 00:48:54.658992 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.58s
2026-04-04 00:48:54.658998 | orchestrator | redis : Restart redis container ----------------------------------------- 7.80s
2026-04-04 00:48:54.659003 | orchestrator | redis : Copying over redis config files --------------------------------- 2.85s
2026-04-04 00:48:54.659009 | orchestrator | redis : Copying over default config.json files -------------------------- 2.66s
2026-04-04 00:48:54.659017 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.37s
2026-04-04 00:48:54.659023 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.08s
2026-04-04 00:48:54.659028 | orchestrator | redis : include_tasks --------------------------------------------------- 1.23s
2026-04-04 00:48:54.659034 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s
2026-04-04 00:48:54.659043 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.63s
2026-04-04 00:48:54.659049 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.59s
2026-04-04 00:48:54.659054 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2026-04-04 00:48:54.659060 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.48s
2026-04-04 00:48:54.659141 | orchestrator | 2026-04-04 00:48:54 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:54.659531 | orchestrator | 2026-04-04 00:48:54 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:54.660300 | orchestrator | 2026-04-04 00:48:54 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:54.662263 | orchestrator | 2026-04-04 00:48:54 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:48:54.663014 | orchestrator | 2026-04-04 00:48:54 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:54.663187 | orchestrator | 2026-04-04 00:48:54 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:57.688838 | orchestrator | 2026-04-04 00:48:57 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:48:57.689220 | orchestrator | 2026-04-04 00:48:57 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:48:57.689884 | orchestrator | 2026-04-04 00:48:57 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:48:57.690537 | orchestrator | 2026-04-04 00:48:57 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:48:57.691221 | orchestrator |
2026-04-04 00:48:57 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:48:57.691246 | orchestrator | 2026-04-04 00:48:57 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:00.713401 | orchestrator | 2026-04-04 00:49:00 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:00.713662 | orchestrator | 2026-04-04 00:49:00 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:00.714299 | orchestrator | 2026-04-04 00:49:00 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:00.715586 | orchestrator | 2026-04-04 00:49:00 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:00.716056 | orchestrator | 2026-04-04 00:49:00 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:00.716071 | orchestrator | 2026-04-04 00:49:00 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:03.736128 | orchestrator | 2026-04-04 00:49:03 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:03.736257 | orchestrator | 2026-04-04 00:49:03 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:03.736795 | orchestrator | 2026-04-04 00:49:03 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:03.737287 | orchestrator | 2026-04-04 00:49:03 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:03.737971 | orchestrator | 2026-04-04 00:49:03 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:03.738131 | orchestrator | 2026-04-04 00:49:03 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:06.758949 | orchestrator | 2026-04-04 00:49:06 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:06.759534 | orchestrator | 2026-04-04 00:49:06 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:06.761608 | orchestrator | 2026-04-04 00:49:06 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:06.762241 | orchestrator | 2026-04-04 00:49:06 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:06.763066 | orchestrator | 2026-04-04 00:49:06 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:06.763094 | orchestrator | 2026-04-04 00:49:06 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:09.797617 | orchestrator | 2026-04-04 00:49:09 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:09.797846 | orchestrator | 2026-04-04 00:49:09 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:09.798831 | orchestrator | 2026-04-04 00:49:09 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:09.800228 | orchestrator | 2026-04-04 00:49:09 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:09.802365 | orchestrator | 2026-04-04 00:49:09 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:09.802425 | orchestrator | 2026-04-04 00:49:09 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:12.853534 | orchestrator | 2026-04-04 00:49:12 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:12.855118 | orchestrator | 2026-04-04 00:49:12 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:12.857050 | orchestrator | 2026-04-04 00:49:12 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:12.858739 | orchestrator | 2026-04-04 00:49:12 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:12.860205 | orchestrator | 2026-04-04 00:49:12 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:12.860534 | orchestrator | 2026-04-04 00:49:12 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:15.903414 | orchestrator | 2026-04-04 00:49:15 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:15.903478 | orchestrator | 2026-04-04 00:49:15 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:15.904217 | orchestrator | 2026-04-04 00:49:15 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:15.905125 | orchestrator | 2026-04-04 00:49:15 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:15.906155 | orchestrator | 2026-04-04 00:49:15 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:15.906192 | orchestrator | 2026-04-04 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:18.940099 | orchestrator | 2026-04-04 00:49:18 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:18.940455 | orchestrator | 2026-04-04 00:49:18 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:18.941670 | orchestrator | 2026-04-04 00:49:18 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:18.942587 | orchestrator | 2026-04-04 00:49:18 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:18.943494 | orchestrator | 2026-04-04 00:49:18 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:18.943530 | orchestrator | 2026-04-04 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:21.968169 | orchestrator | 2026-04-04 00:49:21 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:21.968243 | orchestrator | 2026-04-04 00:49:21 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:21.968775 | orchestrator | 2026-04-04 00:49:21 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:21.969459 | orchestrator | 2026-04-04 00:49:21 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:21.970424 | orchestrator | 2026-04-04 00:49:21 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:21.970478 | orchestrator | 2026-04-04 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:25.015771 | orchestrator | 2026-04-04 00:49:25 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:25.018295 | orchestrator | 2026-04-04 00:49:25 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:25.018943 | orchestrator | 2026-04-04 00:49:25 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:25.023033 | orchestrator | 2026-04-04 00:49:25 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:25.023587 | orchestrator | 2026-04-04 00:49:25 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state STARTED
2026-04-04 00:49:25.023661 | orchestrator | 2026-04-04 00:49:25 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:49:28.088435 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED
2026-04-04 00:49:28.093935 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:49:28.094122 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED
2026-04-04 00:49:28.095361 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED
2026-04-04 00:49:28.096143 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task
1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED
2026-04-04 00:49:28.097787 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task 1eb66581-2114-447e-bbac-06fec5d6b25a is in state SUCCESS
2026-04-04 00:49:28.099359 | orchestrator |
2026-04-04 00:49:28.099381 | orchestrator |
2026-04-04 00:49:28.099386 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:49:28.099391 | orchestrator |
2026-04-04 00:49:28.099395 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:49:28.099399 | orchestrator | Saturday 04 April 2026 00:48:20 +0000 (0:00:00.505) 0:00:00.505 ********
2026-04-04 00:49:28.099404 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:28.099408 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:28.099412 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:28.099416 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:49:28.099420 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:49:28.099424 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:49:28.099428 | orchestrator |
2026-04-04 00:49:28.099432 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:49:28.099436 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:00.820) 0:00:01.325 ********
2026-04-04 00:49:28.099440 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-04 00:49:28.099444 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-04 00:49:28.099448 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-04 00:49:28.099465 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-04 00:49:28.099469 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-04 00:49:28.099473 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-04 00:49:28.099477 | orchestrator |
2026-04-04 00:49:28.099481 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-04 00:49:28.099485 | orchestrator |
2026-04-04 00:49:28.099488 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-04 00:49:28.099492 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.916) 0:00:02.242 ********
2026-04-04 00:49:28.099497 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:49:28.099501 | orchestrator |
2026-04-04 00:49:28.099505 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-04 00:49:28.099510 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:02.071) 0:00:04.313 ********
2026-04-04 00:49:28.099516 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-04 00:49:28.099523 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-04 00:49:28.099528 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-04 00:49:28.099566 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-04 00:49:28.099573 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-04 00:49:28.099579 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-04 00:49:28.099585 | orchestrator |
2026-04-04 00:49:28.099591 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-04 00:49:28.099598 | orchestrator | Saturday 04 April 2026 00:48:26 +0000 (0:00:01.853) 0:00:06.166 ********
2026-04-04 00:49:28.099604 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-04 00:49:28.099611 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-04 00:49:28.099618 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-04 00:49:28.099624 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-04 00:49:28.099628 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-04 00:49:28.099632 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-04 00:49:28.099635 | orchestrator |
2026-04-04 00:49:28.099640 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-04 00:49:28.099643 | orchestrator | Saturday 04 April 2026 00:48:28 +0000 (0:00:02.037) 0:00:08.203 ********
2026-04-04 00:49:28.099647 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-04 00:49:28.099651 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:28.099656 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-04 00:49:28.099659 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:28.099663 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-04 00:49:28.099667 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:28.099671 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-04 00:49:28.099674 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:49:28.099678 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-04 00:49:28.099682 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:49:28.099693 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-04 00:49:28.099697 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:49:28.099701 | orchestrator |
2026-04-04 00:49:28.099704 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-04 00:49:28.099708 | orchestrator | Saturday 04 April 2026 00:48:29 +0000 (0:00:00.963) 0:00:09.167 ********
2026-04-04 00:49:28.099712 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:28.099716 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:28.099725 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:28.099728 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:28.099732 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:28.099736 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:28.099740 | orchestrator | 2026-04-04 00:49:28.099744 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-04 00:49:28.099747 | orchestrator | Saturday 04 April 2026 00:48:30 +0000 (0:00:00.926) 0:00:10.094 ******** 2026-04-04 00:49:28.099760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099825 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099875 | orchestrator | 2026-04-04 00:49:28.099884 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-04 00:49:28.099891 | orchestrator | Saturday 04 April 2026 00:48:32 +0000 (0:00:02.031) 0:00:12.125 ******** 2026-04-04 00:49:28.099898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099974 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-04-04 00:49:28.099983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.099988 | orchestrator | 2026-04-04 00:49:28.099993 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-04 00:49:28.099997 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:02.914) 0:00:15.040 ******** 2026-04-04 00:49:28.100003 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:28.100012 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:28.100021 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:28.100027 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:28.100033 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:28.100040 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:28.100045 | orchestrator | 2026-04-04 00:49:28.100052 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-04 00:49:28.100059 | orchestrator | Saturday 04 April 2026 00:48:36 +0000 (0:00:01.085) 0:00:16.125 ******** 2026-04-04 00:49:28.100066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100095 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:49:28.100149 | orchestrator | 2026-04-04 00:49:28.100153 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-04 00:49:28.100158 | orchestrator | Saturday 04 April 2026 00:48:39 +0000 (0:00:03.375) 0:00:19.501 ******** 2026-04-04 00:49:28.100162 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 00:49:28.100167 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:49:28.100171 | orchestrator | } 2026-04-04 00:49:28.100177 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 00:49:28.100184 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:49:28.100190 | orchestrator | } 2026-04-04 00:49:28.100196 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 00:49:28.100207 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:49:28.100212 | orchestrator | } 2026-04-04 00:49:28.100253 | orchestrator | changed: [testbed-node-3] => { 2026-04-04 00:49:28.100258 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:49:28.100263 | orchestrator | } 2026-04-04 00:49:28.100267 | orchestrator | changed: [testbed-node-4] => { 2026-04-04 00:49:28.100272 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:49:28.100276 | orchestrator | } 2026-04-04 00:49:28.100281 | orchestrator | changed: [testbed-node-5] => { 2026-04-04 00:49:28.100285 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-04 00:49:28.100290 | orchestrator | } 2026-04-04 00:49:28.100294 | orchestrator | 2026-04-04 00:49:28.100299 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 00:49:28.100303 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.479) 0:00:19.980 ******** 2026-04-04 00:49:28.100308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-04 00:49:28.100315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-04 00:49:28.100319 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:28.100326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-04 00:49:28 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:28.100453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-04 00:49:28.100461 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:28.100473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-04 00:49:28.100480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-04 00:49:28.100487 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:28.100494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-04 00:49:28.100504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-04 00:49:28.100511 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:28.100522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-04 00:49:28.100529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-04 00:49:28.100557 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:28.100565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-04 00:49:28.100570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-04 00:49:28.100573 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:28.100577 | orchestrator | 2026-04-04 00:49:28.100581 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:49:28.100585 | 
orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:01.735) 0:00:21.716 ******** 2026-04-04 00:49:28.100589 | orchestrator | 2026-04-04 00:49:28.100593 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:49:28.100597 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.429) 0:00:22.145 ******** 2026-04-04 00:49:28.100600 | orchestrator | 2026-04-04 00:49:28.100604 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:49:28.100608 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.110) 0:00:22.255 ******** 2026-04-04 00:49:28.100612 | orchestrator | 2026-04-04 00:49:28.100615 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:49:28.100622 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.100) 0:00:22.355 ******** 2026-04-04 00:49:28.100626 | orchestrator | 2026-04-04 00:49:28.100629 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:49:28.100633 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.101) 0:00:22.457 ******** 2026-04-04 00:49:28.100637 | orchestrator | 2026-04-04 00:49:28.100641 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:49:28.100644 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.123) 0:00:22.580 ******** 2026-04-04 00:49:28.100648 | orchestrator | 2026-04-04 00:49:28.100652 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-04 00:49:28.100656 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.179) 0:00:22.760 ******** 2026-04-04 00:49:28.100659 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:28.100663 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:28.100667 | orchestrator | changed: 
[testbed-node-4] 2026-04-04 00:49:28.100671 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:28.100677 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:28.100681 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:28.100685 | orchestrator | 2026-04-04 00:49:28.100688 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-04 00:49:28.100695 | orchestrator | Saturday 04 April 2026 00:48:54 +0000 (0:00:11.751) 0:00:34.511 ******** 2026-04-04 00:49:28.100699 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:28.100703 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:28.100707 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:28.100711 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:28.100714 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:28.100718 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:28.100722 | orchestrator | 2026-04-04 00:49:28.100726 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-04 00:49:28.100729 | orchestrator | Saturday 04 April 2026 00:48:56 +0000 (0:00:01.550) 0:00:36.064 ******** 2026-04-04 00:49:28.100733 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:28.100737 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:28.100741 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:28.100744 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:28.100748 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:28.100752 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:28.100755 | orchestrator | 2026-04-04 00:49:28.100759 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-04 00:49:28.100763 | orchestrator | Saturday 04 April 2026 00:49:05 +0000 (0:00:08.932) 0:00:44.997 ******** 2026-04-04 00:49:28.100767 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 
'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-04 00:49:28.100771 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-04 00:49:28.100775 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-04 00:49:28.100778 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-04 00:49:28.100782 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-04 00:49:28.100786 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-04 00:49:28.100790 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-04 00:49:28.100794 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-04 00:49:28.100798 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-04 00:49:28.100801 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-04 00:49:28.100805 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-04 00:49:28.100809 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-04 00:49:28.100812 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:49:28.100816 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': 
True, 'state': 'absent'}) 2026-04-04 00:49:28.100820 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:49:28.100824 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:49:28.100827 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:49:28.100834 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:49:28.100837 | orchestrator | 2026-04-04 00:49:28.100841 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-04 00:49:28.100845 | orchestrator | Saturday 04 April 2026 00:49:10 +0000 (0:00:05.646) 0:00:50.643 ******** 2026-04-04 00:49:28.100849 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-04 00:49:28.100853 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:28.100857 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-04 00:49:28.100861 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:28.101126 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-04 00:49:28.101134 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:28.101139 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-04 00:49:28.101144 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-04 00:49:28.101148 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-04 00:49:28.101153 | orchestrator | 2026-04-04 00:49:28.101157 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-04 00:49:28.101162 | orchestrator | Saturday 04 April 2026 00:49:13 +0000 (0:00:02.437) 0:00:53.081 ******** 2026-04-04 00:49:28.101167 | orchestrator | 
skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-04 00:49:28.101171 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:28.101175 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-04 00:49:28.101180 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:28.101185 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-04 00:49:28.101189 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:28.101194 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-04 00:49:28.101201 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-04 00:49:28.101206 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-04 00:49:28.101210 | orchestrator | 2026-04-04 00:49:28.101214 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-04 00:49:28.101219 | orchestrator | Saturday 04 April 2026 00:49:16 +0000 (0:00:03.648) 0:00:56.729 ******** 2026-04-04 00:49:28.101223 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:28.101228 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:28.101232 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:28.101236 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:28.101241 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:28.101245 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:28.101249 | orchestrator | 2026-04-04 00:49:28.101254 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:49:28.101259 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-04 00:49:28.101263 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-04 00:49:28.101268 | orchestrator | testbed-node-2 : ok=16  changed=12  
unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-04 00:49:28.101272 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 00:49:28.101277 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 00:49:28.101281 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 00:49:28.101289 | orchestrator | 2026-04-04 00:49:28.101294 | orchestrator | 2026-04-04 00:49:28.101298 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:49:28.101302 | orchestrator | Saturday 04 April 2026 00:49:24 +0000 (0:00:07.973) 0:01:04.702 ******** 2026-04-04 00:49:28.101307 | orchestrator | =============================================================================== 2026-04-04 00:49:28.101312 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.91s 2026-04-04 00:49:28.101315 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.75s 2026-04-04 00:49:28.101319 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 5.65s 2026-04-04 00:49:28.101323 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.65s 2026-04-04 00:49:28.101326 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.38s 2026-04-04 00:49:28.101330 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.91s 2026-04-04 00:49:28.101334 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.44s 2026-04-04 00:49:28.101338 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.07s 2026-04-04 00:49:28.101341 | orchestrator | module-load : Persist modules via modules-load.d 
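The openvswitch play above boils down to a handful of `ovs-vsctl` operations per node: set `external_ids:system-id` and `external_ids:hostname`, remove `hw-offload` (its state is `absent` in this run), and, on the nodes carrying the network role, create the `br-ex` bridge with a `vxlan0` port. A dry-run sketch of the equivalent commands (illustrative only; the role actually runs them through the `ovs-vsctl` wrapper it copies onto each node):

```shell
# Print (dry-run) the ovs-vsctl calls corresponding to the tasks above.
ovs_setup_cmds() {
  # $1 = node name, used for both system-id and hostname
  printf 'ovs-vsctl set Open_vSwitch . external_ids:system-id=%s\n' "$1"
  printf 'ovs-vsctl set Open_vSwitch . external_ids:hostname=%s\n' "$1"
  # hw-offload has state 'absent' in this run, so it is removed rather than set
  printf 'ovs-vsctl remove Open_vSwitch . other_config hw-offload\n'
  # bridge/port setup only changed testbed-node-0..2; it was skipped on 3..5
  printf 'ovs-vsctl --may-exist add-br br-ex\n'
  printf 'ovs-vsctl --may-exist add-port br-ex vxlan0\n'
}

ovs_setup_cmds testbed-node-0
```

The `--may-exist` flag makes the bridge and port tasks idempotent, which is why a re-run of the play reports `ok` instead of `changed` for them.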
------------------------ 2.04s 2026-04-04 00:49:28.101345 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.03s 2026-04-04 00:49:28.101349 | orchestrator | module-load : Load modules ---------------------------------------------- 1.85s 2026-04-04 00:49:28.101353 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.74s 2026-04-04 00:49:28.101357 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.55s 2026-04-04 00:49:28.101364 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.09s 2026-04-04 00:49:28.101368 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.04s 2026-04-04 00:49:28.101372 | orchestrator | module-load : Drop module persistence ----------------------------------- 0.96s 2026-04-04 00:49:28.101376 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.93s 2026-04-04 00:49:28.101380 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s 2026-04-04 00:49:28.101383 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2026-04-04 00:49:28.101387 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.48s 2026-04-04 00:49:31.143362 | orchestrator | 2026-04-04 00:49:31 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:31.148748 | orchestrator | 2026-04-04 00:49:31 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:31.149041 | orchestrator | 2026-04-04 00:49:31 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:31.151097 | orchestrator | 2026-04-04 00:49:31 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:31.151677 | orchestrator | 
2026-04-04 00:49:31 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:31.151721 | orchestrator | 2026-04-04 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:34.190167 | orchestrator | 2026-04-04 00:49:34 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:34.190336 | orchestrator | 2026-04-04 00:49:34 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:34.191181 | orchestrator | 2026-04-04 00:49:34 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:34.191909 | orchestrator | 2026-04-04 00:49:34 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:34.192475 | orchestrator | 2026-04-04 00:49:34 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:34.192664 | orchestrator | 2026-04-04 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:37.223022 | orchestrator | 2026-04-04 00:49:37 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:37.223115 | orchestrator | 2026-04-04 00:49:37 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:37.223485 | orchestrator | 2026-04-04 00:49:37 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:37.225225 | orchestrator | 2026-04-04 00:49:37 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:37.225914 | orchestrator | 2026-04-04 00:49:37 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:37.225951 | orchestrator | 2026-04-04 00:49:37 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:40.271052 | orchestrator | 2026-04-04 00:49:40 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:40.271692 | orchestrator | 2026-04-04 00:49:40 | INFO  | 
Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:40.272487 | orchestrator | 2026-04-04 00:49:40 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:40.273451 | orchestrator | 2026-04-04 00:49:40 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:40.274789 | orchestrator | 2026-04-04 00:49:40 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:40.274828 | orchestrator | 2026-04-04 00:49:40 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:43.572019 | orchestrator | 2026-04-04 00:49:43 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:43.572473 | orchestrator | 2026-04-04 00:49:43 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:43.573018 | orchestrator | 2026-04-04 00:49:43 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:43.573814 | orchestrator | 2026-04-04 00:49:43 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:43.577053 | orchestrator | 2026-04-04 00:49:43 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:43.577117 | orchestrator | 2026-04-04 00:49:43 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:46.609077 | orchestrator | 2026-04-04 00:49:46 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:46.610733 | orchestrator | 2026-04-04 00:49:46 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:46.610789 | orchestrator | 2026-04-04 00:49:46 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:46.610798 | orchestrator | 2026-04-04 00:49:46 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:46.611152 | orchestrator | 2026-04-04 00:49:46 | INFO  | Task 
1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:46.612995 | orchestrator | 2026-04-04 00:49:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:49.898000 | orchestrator | 2026-04-04 00:49:49 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:49.898252 | orchestrator | 2026-04-04 00:49:49 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:49.898702 | orchestrator | 2026-04-04 00:49:49 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:49.899485 | orchestrator | 2026-04-04 00:49:49 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:49.900660 | orchestrator | 2026-04-04 00:49:49 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:49.900695 | orchestrator | 2026-04-04 00:49:49 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:52.925543 | orchestrator | 2026-04-04 00:49:52 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state STARTED 2026-04-04 00:49:52.926413 | orchestrator | 2026-04-04 00:49:52 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:52.927456 | orchestrator | 2026-04-04 00:49:52 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:52.927843 | orchestrator | 2026-04-04 00:49:52 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:52.928882 | orchestrator | 2026-04-04 00:49:52 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:52.928920 | orchestrator | 2026-04-04 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:55.964351 | orchestrator | 2026-04-04 00:49:55.964425 | orchestrator | 2026-04-04 00:49:55.964433 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-04 00:49:55.964437 | 
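The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a simple poll loop: each queued task is checked once per cycle, and the loop sleeps a second before the next cycle until no task is still `STARTED`. A minimal sketch of that behaviour (`task_state` is a stub here; the real check queries the OSISM task backend):

```shell
# Stub: the real implementation asks the task API for the current state.
task_state() { echo "SUCCESS"; }

# Poll each task id passed as an argument until none reports STARTED.
poll_tasks() {
  while :; do
    pending=0
    for id in "$@"; do
      state=$(task_state "$id")
      echo "Task $id is in state $state"
      [ "$state" = "STARTED" ] && pending=1
    done
    [ "$pending" -eq 0 ] && break
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done
}

poll_tasks ab7a468e-85c9-4525-a869-b1f5a6cd84d4 8ee163ae-bd62-42f7-b681-5855b26add7d
```

In the log, five task IDs are polled in parallel, which is why each cycle prints five state lines before the one-second wait.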
orchestrator | 2026-04-04 00:49:55.964442 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-04 00:49:55.964447 | orchestrator | Saturday 04 April 2026 00:45:32 +0000 (0:00:00.204) 0:00:00.204 ******** 2026-04-04 00:49:55.964452 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:55.964457 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:55.964461 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:55.964465 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.964468 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.964472 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.964476 | orchestrator | 2026-04-04 00:49:55.964480 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-04 00:49:55.964484 | orchestrator | Saturday 04 April 2026 00:45:33 +0000 (0:00:00.512) 0:00:00.717 ******** 2026-04-04 00:49:55.964487 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.964492 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.964496 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.964500 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.964504 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.964507 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.964559 | orchestrator | 2026-04-04 00:49:55.964563 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-04 00:49:55.964569 | orchestrator | Saturday 04 April 2026 00:45:34 +0000 (0:00:00.668) 0:00:01.385 ******** 2026-04-04 00:49:55.964575 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.964581 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.964590 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.964598 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.964604 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 00:49:55.964609 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.964615 | orchestrator | 2026-04-04 00:49:55.964621 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-04 00:49:55.964627 | orchestrator | Saturday 04 April 2026 00:45:34 +0000 (0:00:00.576) 0:00:01.962 ******** 2026-04-04 00:49:55.964634 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.964640 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.964646 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.964652 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.964659 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:55.964665 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.964689 | orchestrator | 2026-04-04 00:49:55.964694 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-04 00:49:55.964698 | orchestrator | Saturday 04 April 2026 00:45:36 +0000 (0:00:02.032) 0:00:03.994 ******** 2026-04-04 00:49:55.964702 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.964705 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:55.964709 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.964713 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.964717 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.964721 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.964724 | orchestrator | 2026-04-04 00:49:55.964728 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-04 00:49:55.964738 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:01.490) 0:00:05.485 ******** 2026-04-04 00:49:55.964742 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.964746 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.964749 | orchestrator | changed: [testbed-node-0] 
2026-04-04 00:49:55.964753 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.964757 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.964761 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:55.964765 | orchestrator | 2026-04-04 00:49:55.964768 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-04 00:49:55.964772 | orchestrator | Saturday 04 April 2026 00:45:39 +0000 (0:00:01.799) 0:00:07.284 ******** 2026-04-04 00:49:55.964776 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.964780 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.964784 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.964787 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.964791 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.964795 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.964799 | orchestrator | 2026-04-04 00:49:55.964803 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-04 00:49:55.964807 | orchestrator | Saturday 04 April 2026 00:45:40 +0000 (0:00:00.965) 0:00:08.250 ******** 2026-04-04 00:49:55.964811 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.964814 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.964818 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.964822 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.964826 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.964829 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.964833 | orchestrator | 2026-04-04 00:49:55.964837 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-04 00:49:55.964841 | orchestrator | Saturday 04 April 2026 00:45:41 +0000 (0:00:00.980) 0:00:09.231 ******** 2026-04-04 00:49:55.964845 | orchestrator | skipping: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-iptables)  2026-04-04 00:49:55.964849 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 00:49:55.964852 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.964856 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 00:49:55.964860 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 00:49:55.964864 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.964868 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 00:49:55.964871 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 00:49:55.964875 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.964879 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 00:49:55.964927 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 00:49:55.964933 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.964936 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 00:49:55.964945 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 00:49:55.964950 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.964956 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 00:49:55.964962 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 00:49:55.964969 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.964973 | orchestrator | 2026-04-04 00:49:55.964977 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-04 00:49:55.964982 | orchestrator | Saturday 04 April 2026 
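The three `changed` k3s_prereq tasks above (IPv4 forwarding, IPv6 forwarding, IPv6 router advertisements) each set a kernel sysctl. A sketch of the resulting settings, written to a temp file here instead of `/etc/sysctl.d/` (the role applies them via Ansible's sysctl module; the exact key values are assumed from the task names):

```shell
# Assumed sysctl keys behind the three 'changed' k3s_prereq tasks above.
conf="${TMPDIR:-/tmp}/90-k3s-forwarding.conf"
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF
# sysctl -p "$conf"   # would load them into the running kernel; not run here
cat "$conf"
```

`accept_ra = 2` keeps router advertisements accepted even though forwarding is enabled, which would otherwise disable them. The `br_netfilter` and `bridge-nf-call-*` tasks are skipped in this run, so only the forwarding keys change.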
00:45:42 +0000 (0:00:00.916) 0:00:10.147 ******** 2026-04-04 00:49:55.964986 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.964991 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.964995 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.964999 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965004 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965008 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965012 | orchestrator | 2026-04-04 00:49:55.965017 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-04 00:49:55.965022 | orchestrator | Saturday 04 April 2026 00:45:44 +0000 (0:00:01.395) 0:00:11.543 ******** 2026-04-04 00:49:55.965027 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:55.965031 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:55.965036 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:55.965040 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965045 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965049 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965053 | orchestrator | 2026-04-04 00:49:55.965058 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-04 00:49:55.965062 | orchestrator | Saturday 04 April 2026 00:45:45 +0000 (0:00:00.962) 0:00:12.505 ******** 2026-04-04 00:49:55.965068 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.965074 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.965083 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.965090 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:55.965096 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.965102 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.965108 | orchestrator | 2026-04-04 00:49:55.965115 | orchestrator | TASK [k3s_download : 
Download k3s binary arm64] ******************************** 2026-04-04 00:49:55.965121 | orchestrator | Saturday 04 April 2026 00:45:51 +0000 (0:00:06.059) 0:00:18.565 ******** 2026-04-04 00:49:55.965128 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.965134 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.965141 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.965146 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965151 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965155 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965160 | orchestrator | 2026-04-04 00:49:55.965165 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-04 00:49:55.965173 | orchestrator | Saturday 04 April 2026 00:45:51 +0000 (0:00:00.790) 0:00:19.355 ******** 2026-04-04 00:49:55.965177 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.965181 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.965185 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.965189 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965192 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965196 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965200 | orchestrator | 2026-04-04 00:49:55.965204 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-04 00:49:55.965209 | orchestrator | Saturday 04 April 2026 00:45:54 +0000 (0:00:02.532) 0:00:21.888 ******** 2026-04-04 00:49:55.965218 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.965222 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.965225 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.965229 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965233 | orchestrator | skipping: 
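The three download tasks (x64 / arm64 / armhf) exist because k3s publishes a separately named binary per architecture; only the task matching the node's architecture runs, which is why the arm64 and armhf variants are skipped on these x86_64 nodes. A sketch of the URL selection under that assumption (`k3s_version` is an illustrative value, not taken from this job):

```shell
# Map an architecture to the k3s release asset name, as the task split suggests.
k3s_url() {
  version="$1"; arch="$2"
  base="https://github.com/k3s-io/k3s/releases/download/${version}"
  case "$arch" in
    x86_64)  echo "${base}/k3s" ;;
    aarch64) echo "${base}/k3s-arm64" ;;
    armv7l)  echo "${base}/k3s-armhf" ;;
    *) echo "unsupported arch: $arch" >&2; return 1 ;;
  esac
}

k3s_url "v1.30.0+k3s1" "$(uname -m)"
```

On the testbed nodes `uname -m` reports `x86_64`, so only the plain `k3s` asset is fetched, matching the single `changed` download task above.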
[testbed-node-1] 2026-04-04 00:49:55.965237 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965240 | orchestrator | 2026-04-04 00:49:55.965244 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-04 00:49:55.965248 | orchestrator | Saturday 04 April 2026 00:45:55 +0000 (0:00:01.200) 0:00:23.088 ******** 2026-04-04 00:49:55.965252 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-04 00:49:55.965256 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-04 00:49:55.965260 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.965264 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-04 00:49:55.965268 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-04 00:49:55.965272 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.965275 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-04 00:49:55.965279 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-04 00:49:55.965283 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.965287 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-04 00:49:55.965291 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-04 00:49:55.965294 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965298 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-04 00:49:55.965302 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-04 00:49:55.965306 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965309 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-04 00:49:55.965313 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-04 00:49:55.965317 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965321 | orchestrator | 2026-04-04 00:49:55.965324 | orchestrator | TASK 
[k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-04 00:49:55.965332 | orchestrator | Saturday 04 April 2026 00:45:56 +0000 (0:00:00.725) 0:00:23.814 ******** 2026-04-04 00:49:55.965336 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.965339 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.965343 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.965347 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965350 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965354 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965358 | orchestrator | 2026-04-04 00:49:55.965362 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-04 00:49:55.965366 | orchestrator | Saturday 04 April 2026 00:45:57 +0000 (0:00:00.766) 0:00:24.581 ******** 2026-04-04 00:49:55.965369 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.965373 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.965377 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.965380 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965384 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965388 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965392 | orchestrator | 2026-04-04 00:49:55.965395 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-04 00:49:55.965399 | orchestrator | 2026-04-04 00:49:55.965403 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-04 00:49:55.965407 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:01.154) 0:00:25.736 ******** 2026-04-04 00:49:55.965410 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965414 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965418 | orchestrator 
| ok: [testbed-node-2] 2026-04-04 00:49:55.965422 | orchestrator | 2026-04-04 00:49:55.965425 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-04 00:49:55.965434 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:01.296) 0:00:27.032 ******** 2026-04-04 00:49:55.965437 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965441 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965445 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965449 | orchestrator | 2026-04-04 00:49:55.965452 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-04 00:49:55.965456 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:01.311) 0:00:28.344 ******** 2026-04-04 00:49:55.965460 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965464 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965467 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965471 | orchestrator | 2026-04-04 00:49:55.965475 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-04 00:49:55.965479 | orchestrator | Saturday 04 April 2026 00:46:01 +0000 (0:00:01.005) 0:00:29.349 ******** 2026-04-04 00:49:55.965482 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965486 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965490 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965493 | orchestrator | 2026-04-04 00:49:55.965497 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-04 00:49:55.965501 | orchestrator | Saturday 04 April 2026 00:46:03 +0000 (0:00:01.217) 0:00:30.566 ******** 2026-04-04 00:49:55.965505 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965528 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965535 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965541 | 
orchestrator | 2026-04-04 00:49:55.965553 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-04 00:49:55.965561 | orchestrator | Saturday 04 April 2026 00:46:03 +0000 (0:00:00.260) 0:00:30.826 ******** 2026-04-04 00:49:55.965567 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.965572 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.965578 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.965584 | orchestrator | 2026-04-04 00:49:55.965589 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-04 00:49:55.965596 | orchestrator | Saturday 04 April 2026 00:46:04 +0000 (0:00:00.802) 0:00:31.629 ******** 2026-04-04 00:49:55.965602 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.965607 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.965613 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.965619 | orchestrator | 2026-04-04 00:49:55.965626 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-04 00:49:55.965632 | orchestrator | Saturday 04 April 2026 00:46:05 +0000 (0:00:01.561) 0:00:33.191 ******** 2026-04-04 00:49:55.965637 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:49:55.965643 | orchestrator | 2026-04-04 00:49:55.965649 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-04 00:49:55.965656 | orchestrator | Saturday 04 April 2026 00:46:06 +0000 (0:00:00.806) 0:00:33.997 ******** 2026-04-04 00:49:55.965662 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965668 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965674 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965680 | orchestrator | 2026-04-04 00:49:55.965685 | orchestrator | TASK [k3s_server : Create manifests 
directory on first master] ***************** 2026-04-04 00:49:55.965693 | orchestrator | Saturday 04 April 2026 00:46:09 +0000 (0:00:02.536) 0:00:36.533 ******** 2026-04-04 00:49:55.965697 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965700 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965704 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.965708 | orchestrator | 2026-04-04 00:49:55.965712 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-04 00:49:55.965716 | orchestrator | Saturday 04 April 2026 00:46:10 +0000 (0:00:00.838) 0:00:37.372 ******** 2026-04-04 00:49:55.965725 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965729 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965732 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.965736 | orchestrator | 2026-04-04 00:49:55.965740 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-04 00:49:55.965744 | orchestrator | Saturday 04 April 2026 00:46:11 +0000 (0:00:01.149) 0:00:38.522 ******** 2026-04-04 00:49:55.965748 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965751 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965755 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.965759 | orchestrator | 2026-04-04 00:49:55.965763 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-04 00:49:55.965771 | orchestrator | Saturday 04 April 2026 00:46:12 +0000 (0:00:01.517) 0:00:40.040 ******** 2026-04-04 00:49:55.965775 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965779 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965782 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965786 | orchestrator | 2026-04-04 00:49:55.965790 | orchestrator | TASK [k3s_server : Deploy kube-vip 
manifest] *********************************** 2026-04-04 00:49:55.965794 | orchestrator | Saturday 04 April 2026 00:46:13 +0000 (0:00:00.469) 0:00:40.510 ******** 2026-04-04 00:49:55.965797 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965801 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965805 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965808 | orchestrator | 2026-04-04 00:49:55.965812 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-04 00:49:55.965816 | orchestrator | Saturday 04 April 2026 00:46:13 +0000 (0:00:00.701) 0:00:41.212 ******** 2026-04-04 00:49:55.965820 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.965824 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.965827 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.965831 | orchestrator | 2026-04-04 00:49:55.965835 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-04 00:49:55.965838 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:01.767) 0:00:42.979 ******** 2026-04-04 00:49:55.965842 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965846 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965850 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965854 | orchestrator | 2026-04-04 00:49:55.965857 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-04 00:49:55.965861 | orchestrator | Saturday 04 April 2026 00:46:17 +0000 (0:00:02.338) 0:00:45.317 ******** 2026-04-04 00:49:55.965865 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965869 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965872 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965876 | orchestrator | 2026-04-04 00:49:55.965880 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails)] *** 2026-04-04 00:49:55.965884 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:00.499) 0:00:45.817 ******** 2026-04-04 00:49:55.965888 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-04 00:49:55.965892 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-04 00:49:55.965896 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-04 00:49:55.965900 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-04 00:49:55.965907 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-04 00:49:55.965911 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-04 00:49:55.965918 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-04 00:49:55.965922 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-04 00:49:55.965926 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-04 00:49:55.965930 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-04-04 00:49:55.965934 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-04 00:49:55.965937 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-04 00:49:55.965941 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-04 00:49:55.965945 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-04 00:49:55.965949 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-04 00:49:55.965953 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.965956 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.965960 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.965964 | orchestrator | 2026-04-04 00:49:55.965968 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-04 00:49:55.965972 | orchestrator | Saturday 04 April 2026 00:47:12 +0000 (0:00:54.118) 0:01:39.936 ******** 2026-04-04 00:49:55.965975 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.965979 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.965983 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.965987 | orchestrator | 2026-04-04 00:49:55.965990 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-04 00:49:55.965997 | orchestrator | Saturday 04 April 2026 00:47:12 +0000 (0:00:00.430) 0:01:40.366 ******** 2026-04-04 00:49:55.966001 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966004 | orchestrator | changed: 
[testbed-node-2] 2026-04-04 00:49:55.966008 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966069 | orchestrator | 2026-04-04 00:49:55.966075 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-04 00:49:55.966080 | orchestrator | Saturday 04 April 2026 00:47:14 +0000 (0:00:01.136) 0:01:41.503 ******** 2026-04-04 00:49:55.966086 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966092 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966100 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.966108 | orchestrator | 2026-04-04 00:49:55.966115 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-04 00:49:55.966121 | orchestrator | Saturday 04 April 2026 00:47:15 +0000 (0:00:01.154) 0:01:42.658 ******** 2026-04-04 00:49:55.966128 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966133 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.966139 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966144 | orchestrator | 2026-04-04 00:49:55.966151 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-04 00:49:55.966181 | orchestrator | Saturday 04 April 2026 00:47:40 +0000 (0:00:25.251) 0:02:07.909 ******** 2026-04-04 00:49:55.966187 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.966193 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.966199 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.966205 | orchestrator | 2026-04-04 00:49:55.966218 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-04 00:49:55.966226 | orchestrator | Saturday 04 April 2026 00:47:41 +0000 (0:00:00.758) 0:02:08.668 ******** 2026-04-04 00:49:55.966230 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.966233 | orchestrator | ok: [testbed-node-1] 2026-04-04 
00:49:55.966237 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.966241 | orchestrator | 2026-04-04 00:49:55.966245 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-04 00:49:55.966249 | orchestrator | Saturday 04 April 2026 00:47:42 +0000 (0:00:00.915) 0:02:09.583 ******** 2026-04-04 00:49:55.966253 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966257 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966260 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.966264 | orchestrator | 2026-04-04 00:49:55.966268 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-04 00:49:55.966272 | orchestrator | Saturday 04 April 2026 00:47:42 +0000 (0:00:00.696) 0:02:10.280 ******** 2026-04-04 00:49:55.966276 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.966279 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.966283 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.966287 | orchestrator | 2026-04-04 00:49:55.966291 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-04 00:49:55.966295 | orchestrator | Saturday 04 April 2026 00:47:43 +0000 (0:00:00.587) 0:02:10.867 ******** 2026-04-04 00:49:55.966299 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.966303 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.966306 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.966310 | orchestrator | 2026-04-04 00:49:55.966314 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-04 00:49:55.966322 | orchestrator | Saturday 04 April 2026 00:47:43 +0000 (0:00:00.270) 0:02:11.138 ******** 2026-04-04 00:49:55.966326 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966330 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966333 | orchestrator | changed: 
[testbed-node-2] 2026-04-04 00:49:55.966337 | orchestrator | 2026-04-04 00:49:55.966341 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-04 00:49:55.966345 | orchestrator | Saturday 04 April 2026 00:47:44 +0000 (0:00:00.803) 0:02:11.942 ******** 2026-04-04 00:49:55.966349 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966353 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966356 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.966360 | orchestrator | 2026-04-04 00:49:55.966364 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-04 00:49:55.966368 | orchestrator | Saturday 04 April 2026 00:47:45 +0000 (0:00:00.666) 0:02:12.608 ******** 2026-04-04 00:49:55.966372 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966375 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966379 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.966383 | orchestrator | 2026-04-04 00:49:55.966387 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-04 00:49:55.966391 | orchestrator | Saturday 04 April 2026 00:47:46 +0000 (0:00:00.809) 0:02:13.418 ******** 2026-04-04 00:49:55.966394 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:55.966398 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:55.966402 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:55.966406 | orchestrator | 2026-04-04 00:49:55.966409 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-04 00:49:55.966413 | orchestrator | Saturday 04 April 2026 00:47:46 +0000 (0:00:00.767) 0:02:14.186 ******** 2026-04-04 00:49:55.966417 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.966421 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.966425 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:49:55.966429 | orchestrator | 2026-04-04 00:49:55.966433 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-04 00:49:55.966440 | orchestrator | Saturday 04 April 2026 00:47:47 +0000 (0:00:00.354) 0:02:14.540 ******** 2026-04-04 00:49:55.966444 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.966448 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.966452 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.966456 | orchestrator | 2026-04-04 00:49:55.966459 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-04 00:49:55.966463 | orchestrator | Saturday 04 April 2026 00:47:47 +0000 (0:00:00.257) 0:02:14.798 ******** 2026-04-04 00:49:55.966467 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.966471 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.966475 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.966478 | orchestrator | 2026-04-04 00:49:55.966482 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-04 00:49:55.966486 | orchestrator | Saturday 04 April 2026 00:47:48 +0000 (0:00:00.637) 0:02:15.436 ******** 2026-04-04 00:49:55.966490 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.966499 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.966503 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.966507 | orchestrator | 2026-04-04 00:49:55.966526 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-04 00:49:55.966530 | orchestrator | Saturday 04 April 2026 00:47:48 +0000 (0:00:00.666) 0:02:16.103 ******** 2026-04-04 00:49:55.966534 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-04 00:49:55.966538 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-04 00:49:55.966542 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-04 00:49:55.966546 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-04 00:49:55.966550 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-04 00:49:55.966554 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-04 00:49:55.966557 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-04 00:49:55.966561 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-04 00:49:55.966567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-04 00:49:55.966574 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-04 00:49:55.966582 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-04 00:49:55.966589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-04 00:49:55.966595 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-04 00:49:55.966601 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-04 00:49:55.966607 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-04 00:49:55.966614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-04 00:49:55.966619 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-04 00:49:55.966625 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-04 00:49:55.966683 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-04 00:49:55.966693 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-04 00:49:55.966700 | orchestrator | 2026-04-04 00:49:55.966706 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-04 00:49:55.966723 | orchestrator | 2026-04-04 00:49:55.966729 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-04 00:49:55.966735 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:03.450) 0:02:19.553 ******** 2026-04-04 00:49:55.966740 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:55.966747 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:55.966752 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:55.966758 | orchestrator | 2026-04-04 00:49:55.966764 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-04 00:49:55.966770 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:00.294) 0:02:19.848 ******** 2026-04-04 00:49:55.966776 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:55.966782 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:55.966788 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:55.966794 | orchestrator | 2026-04-04 00:49:55.966800 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-04 00:49:55.966806 | orchestrator | Saturday 04 April 2026 00:47:53 +0000 (0:00:00.615) 0:02:20.464 ******** 2026-04-04 00:49:55.966812 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:55.966818 | 
orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:55.966825 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:55.966831 | orchestrator | 2026-04-04 00:49:55.966837 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-04 00:49:55.966843 | orchestrator | Saturday 04 April 2026 00:47:53 +0000 (0:00:00.366) 0:02:20.830 ******** 2026-04-04 00:49:55.966849 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:49:55.966856 | orchestrator | 2026-04-04 00:49:55.966863 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-04 00:49:55.966869 | orchestrator | Saturday 04 April 2026 00:47:53 +0000 (0:00:00.421) 0:02:21.252 ******** 2026-04-04 00:49:55.966876 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.966884 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.966890 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.966897 | orchestrator | 2026-04-04 00:49:55.966904 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-04 00:49:55.966910 | orchestrator | Saturday 04 April 2026 00:47:54 +0000 (0:00:00.263) 0:02:21.516 ******** 2026-04-04 00:49:55.966917 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.966924 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.966931 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.966938 | orchestrator | 2026-04-04 00:49:55.966945 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-04 00:49:55.966961 | orchestrator | Saturday 04 April 2026 00:47:54 +0000 (0:00:00.360) 0:02:21.876 ******** 2026-04-04 00:49:55.966969 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.966976 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.966983 | 
orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.966990 | orchestrator | 2026-04-04 00:49:55.966997 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-04 00:49:55.967005 | orchestrator | Saturday 04 April 2026 00:47:54 +0000 (0:00:00.267) 0:02:22.144 ******** 2026-04-04 00:49:55.967012 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.967019 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.967026 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:55.967033 | orchestrator | 2026-04-04 00:49:55.967040 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-04 00:49:55.967048 | orchestrator | Saturday 04 April 2026 00:47:55 +0000 (0:00:00.582) 0:02:22.727 ******** 2026-04-04 00:49:55.967055 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.967062 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.967069 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:55.967076 | orchestrator | 2026-04-04 00:49:55.967083 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-04 00:49:55.967099 | orchestrator | Saturday 04 April 2026 00:47:56 +0000 (0:00:01.041) 0:02:23.768 ******** 2026-04-04 00:49:55.967107 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.967113 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.967120 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:55.967127 | orchestrator | 2026-04-04 00:49:55.967135 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-04 00:49:55.967143 | orchestrator | Saturday 04 April 2026 00:47:57 +0000 (0:00:01.551) 0:02:25.320 ******** 2026-04-04 00:49:55.967150 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:55.967157 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:55.967164 | orchestrator | 
changed: [testbed-node-5] 2026-04-04 00:49:55.967171 | orchestrator | 2026-04-04 00:49:55.967178 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-04 00:49:55.967185 | orchestrator | 2026-04-04 00:49:55.967191 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-04 00:49:55.967198 | orchestrator | Saturday 04 April 2026 00:48:08 +0000 (0:00:10.471) 0:02:35.792 ******** 2026-04-04 00:49:55.967204 | orchestrator | ok: [testbed-manager] 2026-04-04 00:49:55.967210 | orchestrator | 2026-04-04 00:49:55.967216 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-04 00:49:55.967222 | orchestrator | Saturday 04 April 2026 00:48:09 +0000 (0:00:00.716) 0:02:36.509 ******** 2026-04-04 00:49:55.967228 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.967234 | orchestrator | 2026-04-04 00:49:55.967240 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-04 00:49:55.967247 | orchestrator | Saturday 04 April 2026 00:48:09 +0000 (0:00:00.385) 0:02:36.894 ******** 2026-04-04 00:49:55.967253 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-04 00:49:55.967259 | orchestrator | 2026-04-04 00:49:55.967266 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-04 00:49:55.967272 | orchestrator | Saturday 04 April 2026 00:48:09 +0000 (0:00:00.453) 0:02:37.348 ******** 2026-04-04 00:49:55.967284 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.967291 | orchestrator | 2026-04-04 00:49:55.967297 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-04 00:49:55.967303 | orchestrator | Saturday 04 April 2026 00:48:10 +0000 (0:00:00.821) 0:02:38.170 ******** 2026-04-04 00:49:55.967310 | orchestrator | changed: 
[testbed-manager] 2026-04-04 00:49:55.967316 | orchestrator | 2026-04-04 00:49:55.967323 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-04 00:49:55.967329 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.477) 0:02:38.648 ******** 2026-04-04 00:49:55.967336 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-04 00:49:55.967343 | orchestrator | 2026-04-04 00:49:55.967349 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-04 00:49:55.967355 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:01.447) 0:02:40.096 ******** 2026-04-04 00:49:55.967362 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-04 00:49:55.967368 | orchestrator | 2026-04-04 00:49:55.967375 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-04 00:49:55.967381 | orchestrator | Saturday 04 April 2026 00:48:13 +0000 (0:00:00.797) 0:02:40.893 ******** 2026-04-04 00:49:55.967387 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.967394 | orchestrator | 2026-04-04 00:49:55.967401 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-04 00:49:55.967407 | orchestrator | Saturday 04 April 2026 00:48:13 +0000 (0:00:00.365) 0:02:41.259 ******** 2026-04-04 00:49:55.967414 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.967420 | orchestrator | 2026-04-04 00:49:55.967426 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-04 00:49:55.967433 | orchestrator | 2026-04-04 00:49:55.967439 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-04 00:49:55.967453 | orchestrator | Saturday 04 April 2026 00:48:14 +0000 (0:00:00.388) 0:02:41.648 ******** 2026-04-04 00:49:55.967459 | orchestrator | ok: [testbed-manager] 
2026-04-04 00:49:55.967465 | orchestrator | 2026-04-04 00:49:55.967470 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-04 00:49:55.967477 | orchestrator | Saturday 04 April 2026 00:48:14 +0000 (0:00:00.129) 0:02:41.777 ******** 2026-04-04 00:49:55.967483 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:49:55.967490 | orchestrator | 2026-04-04 00:49:55.967496 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-04 00:49:55.967502 | orchestrator | Saturday 04 April 2026 00:48:14 +0000 (0:00:00.222) 0:02:41.999 ******** 2026-04-04 00:49:55.967535 | orchestrator | ok: [testbed-manager] 2026-04-04 00:49:55.967544 | orchestrator | 2026-04-04 00:49:55.967551 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-04 00:49:55.967557 | orchestrator | Saturday 04 April 2026 00:48:15 +0000 (0:00:01.009) 0:02:43.009 ******** 2026-04-04 00:49:55.967573 | orchestrator | ok: [testbed-manager] 2026-04-04 00:49:55.967580 | orchestrator | 2026-04-04 00:49:55.967586 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-04 00:49:55.967592 | orchestrator | Saturday 04 April 2026 00:48:16 +0000 (0:00:01.278) 0:02:44.287 ******** 2026-04-04 00:49:55.967598 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.967604 | orchestrator | 2026-04-04 00:49:55.967611 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-04 00:49:55.967615 | orchestrator | Saturday 04 April 2026 00:48:17 +0000 (0:00:00.778) 0:02:45.065 ******** 2026-04-04 00:49:55.967619 | orchestrator | ok: [testbed-manager] 2026-04-04 00:49:55.967623 | orchestrator | 2026-04-04 00:49:55.967627 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 
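The kubectl role tasks just shown (add gpg key, set permissions, add repository) follow the usual signed-by apt repository pattern. A sketch of the sources.list line such a task would render, assuming the upstream `pkgs.k8s.io` layout; the exact keyring path and version pin used by the role are assumptions:

```python
def kubernetes_apt_source(minor_version: str,
                          keyring: str = "/etc/apt/keyrings/kubernetes-apt-keyring.gpg") -> str:
    """Build the apt sources line an 'Add repository Debian' task might write.

    Sketch only: pkgs.k8s.io/core:/stable:/vX.Y/deb/ is the upstream
    Kubernetes repository layout; the role's actual template may differ.
    """
    return (f"deb [signed-by={keyring}] "
            f"https://pkgs.k8s.io/core:/stable:/v{minor_version}/deb/ /")

print(kubernetes_apt_source("1.30"))
```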
2026-04-04 00:49:55.967633 | orchestrator | Saturday 04 April 2026 00:48:18 +0000 (0:00:00.385) 0:02:45.451 ******** 2026-04-04 00:49:55.967639 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.967645 | orchestrator | 2026-04-04 00:49:55.967652 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-04 00:49:55.967658 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:06.907) 0:02:52.358 ******** 2026-04-04 00:49:55.967664 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.967670 | orchestrator | 2026-04-04 00:49:55.967676 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-04 00:49:55.967683 | orchestrator | Saturday 04 April 2026 00:48:36 +0000 (0:00:11.294) 0:03:03.653 ******** 2026-04-04 00:49:55.967689 | orchestrator | ok: [testbed-manager] 2026-04-04 00:49:55.967695 | orchestrator | 2026-04-04 00:49:55.967702 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-04 00:49:55.967708 | orchestrator | 2026-04-04 00:49:55.967715 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-04 00:49:55.967721 | orchestrator | Saturday 04 April 2026 00:48:36 +0000 (0:00:00.469) 0:03:04.123 ******** 2026-04-04 00:49:55.967727 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.967734 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.967738 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.967742 | orchestrator | 2026-04-04 00:49:55.967745 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-04 00:49:55.967749 | orchestrator | Saturday 04 April 2026 00:48:37 +0000 (0:00:00.481) 0:03:04.604 ******** 2026-04-04 00:49:55.967753 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.967757 | orchestrator | skipping: [testbed-node-1] 
2026-04-04 00:49:55.967761 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.967765 | orchestrator | 2026-04-04 00:49:55.967769 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-04 00:49:55.967772 | orchestrator | Saturday 04 April 2026 00:48:37 +0000 (0:00:00.300) 0:03:04.905 ******** 2026-04-04 00:49:55.967782 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:49:55.967786 | orchestrator | 2026-04-04 00:49:55.967790 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-04 00:49:55.967794 | orchestrator | Saturday 04 April 2026 00:48:38 +0000 (0:00:00.476) 0:03:05.382 ******** 2026-04-04 00:49:55.967803 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-04 00:49:55.967807 | orchestrator | 2026-04-04 00:49:55.967811 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-04 00:49:55.967815 | orchestrator | Saturday 04 April 2026 00:48:38 +0000 (0:00:00.815) 0:03:06.197 ******** 2026-04-04 00:49:55.967819 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:49:55.967823 | orchestrator | 2026-04-04 00:49:55.967827 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-04 00:49:55.967831 | orchestrator | Saturday 04 April 2026 00:48:39 +0000 (0:00:00.708) 0:03:06.905 ******** 2026-04-04 00:49:55.967835 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.967839 | orchestrator | 2026-04-04 00:49:55.967843 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-04 00:49:55.967847 | orchestrator | Saturday 04 April 2026 00:48:39 +0000 (0:00:00.194) 0:03:07.100 ******** 2026-04-04 00:49:55.967851 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:49:55.967854 | 
orchestrator | 2026-04-04 00:49:55.967858 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-04 00:49:55.967862 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.785) 0:03:07.886 ******** 2026-04-04 00:49:55.967866 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.967870 | orchestrator | 2026-04-04 00:49:55.967874 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-04 00:49:55.967878 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.099) 0:03:07.985 ******** 2026-04-04 00:49:55.967882 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.967886 | orchestrator | 2026-04-04 00:49:55.967890 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-04 00:49:55.967894 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.083) 0:03:08.069 ******** 2026-04-04 00:49:55.967898 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.967902 | orchestrator | 2026-04-04 00:49:55.967906 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-04 00:49:55.967910 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.082) 0:03:08.151 ******** 2026-04-04 00:49:55.967914 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.967918 | orchestrator | 2026-04-04 00:49:55.967921 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-04 00:49:55.967925 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.095) 0:03:08.247 ******** 2026-04-04 00:49:55.967929 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-04 00:49:55.967933 | orchestrator | 2026-04-04 00:49:55.967937 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-04 00:49:55.967941 | orchestrator | Saturday 04 April 
2026 00:48:45 +0000 (0:00:04.592) 0:03:12.839 ******** 2026-04-04 00:49:55.967945 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-04 00:49:55.967952 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-04-04 00:49:55.967957 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-04 00:49:55.967961 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-04 00:49:55.967965 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-04 00:49:55.967969 | orchestrator | 2026-04-04 00:49:55.967973 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-04 00:49:55.967977 | orchestrator | Saturday 04 April 2026 00:49:28 +0000 (0:00:42.616) 0:03:55.455 ******** 2026-04-04 00:49:55.967984 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:49:55.967988 | orchestrator | 2026-04-04 00:49:55.967992 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-04 00:49:55.967996 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:01.264) 0:03:56.719 ******** 2026-04-04 00:49:55.968000 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-04 00:49:55.968004 | orchestrator | 2026-04-04 00:49:55.968008 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-04 00:49:55.968021 | orchestrator | Saturday 04 April 2026 00:49:30 +0000 (0:00:01.642) 0:03:58.362 ******** 2026-04-04 00:49:55.968025 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-04 00:49:55.968034 | orchestrator | 2026-04-04 00:49:55.968038 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-04 00:49:55.968042 | orchestrator | Saturday 04 April 2026 00:49:32 +0000 
(0:00:01.148) 0:03:59.511 ******** 2026-04-04 00:49:55.968046 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.968050 | orchestrator | 2026-04-04 00:49:55.968053 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-04 00:49:55.968057 | orchestrator | Saturday 04 April 2026 00:49:32 +0000 (0:00:00.128) 0:03:59.639 ******** 2026-04-04 00:49:55.968061 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-04 00:49:55.968065 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-04 00:49:55.968069 | orchestrator | 2026-04-04 00:49:55.968073 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-04 00:49:55.968077 | orchestrator | Saturday 04 April 2026 00:49:34 +0000 (0:00:01.851) 0:04:01.490 ******** 2026-04-04 00:49:55.968080 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.968084 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.968088 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.968092 | orchestrator | 2026-04-04 00:49:55.968095 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-04 00:49:55.968099 | orchestrator | Saturday 04 April 2026 00:49:34 +0000 (0:00:00.213) 0:04:01.704 ******** 2026-04-04 00:49:55.968103 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.968107 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.968111 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.968116 | orchestrator | 2026-04-04 00:49:55.968122 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-04 00:49:55.968128 | orchestrator | 2026-04-04 00:49:55.968140 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-04 
00:49:55.968149 | orchestrator | Saturday 04 April 2026 00:49:35 +0000 (0:00:00.960) 0:04:02.664 ******** 2026-04-04 00:49:55.968156 | orchestrator | ok: [testbed-manager] 2026-04-04 00:49:55.968162 | orchestrator | 2026-04-04 00:49:55.968168 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-04 00:49:55.968174 | orchestrator | Saturday 04 April 2026 00:49:35 +0000 (0:00:00.149) 0:04:02.814 ******** 2026-04-04 00:49:55.968180 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:49:55.968186 | orchestrator | 2026-04-04 00:49:55.968191 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-04 00:49:55.968197 | orchestrator | Saturday 04 April 2026 00:49:35 +0000 (0:00:00.297) 0:04:03.111 ******** 2026-04-04 00:49:55.968204 | orchestrator | changed: [testbed-manager] 2026-04-04 00:49:55.968210 | orchestrator | 2026-04-04 00:49:55.968216 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-04 00:49:55.968222 | orchestrator | 2026-04-04 00:49:55.968228 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-04 00:49:55.968234 | orchestrator | Saturday 04 April 2026 00:49:40 +0000 (0:00:04.897) 0:04:08.008 ******** 2026-04-04 00:49:55.968240 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:55.968246 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:55.968258 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:55.968264 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:55.968271 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:55.968276 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:55.968279 | orchestrator | 2026-04-04 00:49:55.968283 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-04 00:49:55.968288 | orchestrator | 
Saturday 04 April 2026 00:49:41 +0000 (0:00:00.508) 0:04:08.517 ******** 2026-04-04 00:49:55.968295 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-04 00:49:55.968300 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-04 00:49:55.968307 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-04 00:49:55.968313 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-04 00:49:55.968318 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-04 00:49:55.968324 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-04 00:49:55.968330 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-04 00:49:55.968336 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-04 00:49:55.968347 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-04 00:49:55.968353 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-04 00:49:55.968359 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-04 00:49:55.968364 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-04 00:49:55.968370 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-04 00:49:55.968376 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-04 00:49:55.968382 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-04 00:49:55.968388 | orchestrator | 
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-04 00:49:55.968394 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-04 00:49:55.968400 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-04 00:49:55.968406 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-04 00:49:55.968412 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-04 00:49:55.968418 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-04 00:49:55.968424 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-04 00:49:55.968431 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-04 00:49:55.968437 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-04 00:49:55.968443 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-04 00:49:55.968449 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-04 00:49:55.968455 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-04 00:49:55.968461 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-04 00:49:55.968468 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-04 00:49:55.968472 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-04 00:49:55.968482 | orchestrator | 2026-04-04 00:49:55.968485 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-04 
00:49:55.968494 | orchestrator | Saturday 04 April 2026 00:49:53 +0000 (0:00:12.850) 0:04:21.368 ******** 2026-04-04 00:49:55.968498 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.968502 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.968506 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.968528 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.968534 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.968538 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.968542 | orchestrator | 2026-04-04 00:49:55.968546 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-04 00:49:55.968549 | orchestrator | Saturday 04 April 2026 00:49:54 +0000 (0:00:00.413) 0:04:21.781 ******** 2026-04-04 00:49:55.968553 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:49:55.968557 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:49:55.968561 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:49:55.968564 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:55.968570 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:55.968576 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:55.968585 | orchestrator | 2026-04-04 00:49:55.968593 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:49:55.968598 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:49:55.968607 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-04 00:49:55.968613 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-04 00:49:55.968619 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-04 00:49:55.968625 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-04 00:49:55.968630 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-04 00:49:55.968636 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-04 00:49:55.968642 | orchestrator | 2026-04-04 00:49:55.968648 | orchestrator | 2026-04-04 00:49:55.968655 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:49:55.968666 | orchestrator | Saturday 04 April 2026 00:49:54 +0000 (0:00:00.435) 0:04:22.217 ******** 2026-04-04 00:49:55.968672 | orchestrator | =============================================================================== 2026-04-04 00:49:55.968683 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.12s 2026-04-04 00:49:55.968691 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.62s 2026-04-04 00:49:55.968697 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.25s 2026-04-04 00:49:55.968703 | orchestrator | Manage labels ---------------------------------------------------------- 12.85s 2026-04-04 00:49:55.968709 | orchestrator | kubectl : Install required packages ------------------------------------ 11.29s 2026-04-04 00:49:55.968715 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.47s 2026-04-04 00:49:55.968721 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.91s 2026-04-04 00:49:55.968725 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.06s 2026-04-04 00:49:55.968728 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.90s 2026-04-04 00:49:55.968738 | orchestrator | 
k3s_server_post : Install Cilium ---------------------------------------- 4.59s 2026-04-04 00:49:55.968741 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.45s 2026-04-04 00:49:55.968745 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.54s 2026-04-04 00:49:55.968749 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.53s 2026-04-04 00:49:55.968753 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.34s 2026-04-04 00:49:55.968757 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.03s 2026-04-04 00:49:55.968761 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.85s 2026-04-04 00:49:55.968764 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.80s 2026-04-04 00:49:55.968768 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.77s 2026-04-04 00:49:55.968772 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.64s 2026-04-04 00:49:55.968776 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.56s 2026-04-04 00:49:55.968780 | orchestrator | 2026-04-04 00:49:55 | INFO  | Task ab7a468e-85c9-4525-a869-b1f5a6cd84d4 is in state SUCCESS 2026-04-04 00:49:55.968784 | orchestrator | 2026-04-04 00:49:55 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:55.968788 | orchestrator | 2026-04-04 00:49:55 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:55.968792 | orchestrator | 2026-04-04 00:49:55 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:55.968890 | orchestrator | 2026-04-04 00:49:55 | INFO  | Task 
1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:55.968900 | orchestrator | 2026-04-04 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:58.998269 | orchestrator | 2026-04-04 00:49:58 | INFO  | Task bc2d7cb5-9053-4b08-88cc-df0b6c2d3dd7 is in state STARTED 2026-04-04 00:49:59.001661 | orchestrator | 2026-04-04 00:49:59 | INFO  | Task a969590d-a770-4c87-8b0c-82f9e4ab0a90 is in state STARTED 2026-04-04 00:49:59.004327 | orchestrator | 2026-04-04 00:49:59 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:49:59.005744 | orchestrator | 2026-04-04 00:49:59 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:49:59.007267 | orchestrator | 2026-04-04 00:49:59 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:49:59.010689 | orchestrator | 2026-04-04 00:49:59 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:49:59.010749 | orchestrator | 2026-04-04 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:50:02.045468 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task bc2d7cb5-9053-4b08-88cc-df0b6c2d3dd7 is in state SUCCESS 2026-04-04 00:50:02.045601 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task a969590d-a770-4c87-8b0c-82f9e4ab0a90 is in state STARTED 2026-04-04 00:50:02.046384 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:50:02.047415 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:50:02.048614 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:50:02.049451 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:50:02.049526 | orchestrator | 2026-04-04 00:50:02 | INFO  | Wait 1 
second(s) until the next check 2026-04-04 00:50:05.083904 | orchestrator | 2026-04-04 00:50:05 | INFO  | Task a969590d-a770-4c87-8b0c-82f9e4ab0a90 is in state STARTED 2026-04-04 00:50:05.083998 | orchestrator | 2026-04-04 00:50:05 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:50:05.084006 | orchestrator | 2026-04-04 00:50:05 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:50:05.084557 | orchestrator | 2026-04-04 00:50:05 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:50:05.085492 | orchestrator | 2026-04-04 00:50:05 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:50:05.085531 | orchestrator | 2026-04-04 00:50:05 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:50:08.116377 | orchestrator | 2026-04-04 00:50:08 | INFO  | Task a969590d-a770-4c87-8b0c-82f9e4ab0a90 is in state SUCCESS 2026-04-04 00:50:08.118250 | orchestrator | 2026-04-04 00:50:08 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:50:08.120533 | orchestrator | 2026-04-04 00:50:08 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:50:08.122288 | orchestrator | 2026-04-04 00:50:08 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:50:08.124233 | orchestrator | 2026-04-04 00:50:08 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:50:08.124280 | orchestrator | 2026-04-04 00:50:08 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:50:11.154658 | orchestrator | 2026-04-04 00:50:11 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:50:11.155245 | orchestrator | 2026-04-04 00:50:11 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:50:11.156585 | orchestrator | 2026-04-04 00:50:11 | INFO  | Task 
57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:50:11.159959 | orchestrator | 2026-04-04 00:50:11 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:50:11.160022 | orchestrator | 2026-04-04 00:50:11 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:50:14.196977 | orchestrator | 2026-04-04 00:50:14 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:50:14.199779 | orchestrator | 2026-04-04 00:50:14 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:50:14.202530 | orchestrator | 2026-04-04 00:50:14 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:50:14.204850 | orchestrator | 2026-04-04 00:50:14 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:50:14.205235 | orchestrator | 2026-04-04 00:50:14 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:50:17.253911 | orchestrator | 2026-04-04 00:50:17 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:50:17.254714 | orchestrator | 2026-04-04 00:50:17 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:50:17.257801 | orchestrator | 2026-04-04 00:50:17 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:50:17.260686 | orchestrator | 2026-04-04 00:50:17 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:50:17.260743 | orchestrator | 2026-04-04 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:50:20.306136 | orchestrator | 2026-04-04 00:50:20 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:50:20.306680 | orchestrator | 2026-04-04 00:50:20 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:50:20.307414 | orchestrator | 2026-04-04 00:50:20 | INFO  | Task 
57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:50:20.308167 | orchestrator | 2026-04-04 00:50:20 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:50:20.308197 | orchestrator | 2026-04-04 00:50:20 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:51:42.807340 | orchestrator | 2026-04-04 00:51:42 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:51:42.807468 | orchestrator | 2026-04-04 00:51:42 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:51:42.808125 | orchestrator | 2026-04-04 00:51:42 | INFO  | Task 
57efe5bd-2ba9-43b9-8af0-993421f42475 is in state STARTED 2026-04-04 00:51:42.809087 | orchestrator | 2026-04-04 00:51:42 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:51:42.809127 | orchestrator | 2026-04-04 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:51:45.905962 | orchestrator | 2026-04-04 00:51:45 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:51:45.906280 | orchestrator | 2026-04-04 00:51:45 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:51:45.907161 | orchestrator | 2026-04-04 00:51:45 | INFO  | Task 57efe5bd-2ba9-43b9-8af0-993421f42475 is in state SUCCESS 2026-04-04 00:51:45.908960 | orchestrator | 2026-04-04 00:51:45.909007 | orchestrator | 2026-04-04 00:51:45.909015 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-04 00:51:45.909022 | orchestrator | 2026-04-04 00:51:45.909027 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-04 00:51:45.909033 | orchestrator | Saturday 04 April 2026 00:49:58 +0000 (0:00:00.200) 0:00:00.200 ******** 2026-04-04 00:51:45.909039 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-04 00:51:45.909045 | orchestrator | 2026-04-04 00:51:45.909050 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-04 00:51:45.909055 | orchestrator | Saturday 04 April 2026 00:49:59 +0000 (0:00:01.064) 0:00:01.264 ******** 2026-04-04 00:51:45.909060 | orchestrator | changed: [testbed-manager] 2026-04-04 00:51:45.909065 | orchestrator | 2026-04-04 00:51:45.909071 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-04 00:51:45.909076 | orchestrator | Saturday 04 April 2026 00:50:00 +0000 (0:00:01.412) 0:00:02.677 ******** 2026-04-04 00:51:45.909081 | orchestrator | 
changed: [testbed-manager] 2026-04-04 00:51:45.909086 | orchestrator | 2026-04-04 00:51:45.909091 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:51:45.909096 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:51:45.909103 | orchestrator | 2026-04-04 00:51:45.909108 | orchestrator | 2026-04-04 00:51:45.909113 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:51:45.909118 | orchestrator | Saturday 04 April 2026 00:50:01 +0000 (0:00:00.398) 0:00:03.075 ******** 2026-04-04 00:51:45.909123 | orchestrator | =============================================================================== 2026-04-04 00:51:45.909127 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.41s 2026-04-04 00:51:45.909133 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.06s 2026-04-04 00:51:45.909137 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s 2026-04-04 00:51:45.909142 | orchestrator | 2026-04-04 00:51:45.909147 | orchestrator | 2026-04-04 00:51:45.909152 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-04 00:51:45.909157 | orchestrator | 2026-04-04 00:51:45.909162 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-04 00:51:45.909167 | orchestrator | Saturday 04 April 2026 00:49:58 +0000 (0:00:00.252) 0:00:00.252 ******** 2026-04-04 00:51:45.909185 | orchestrator | ok: [testbed-manager] 2026-04-04 00:51:45.909191 | orchestrator | 2026-04-04 00:51:45.909197 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-04 00:51:45.909201 | orchestrator | Saturday 04 April 2026 00:49:58 +0000 (0:00:00.797) 0:00:01.049 
******** 2026-04-04 00:51:45.909206 | orchestrator | ok: [testbed-manager] 2026-04-04 00:51:45.909211 | orchestrator | 2026-04-04 00:51:45.909216 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-04 00:51:45.909221 | orchestrator | Saturday 04 April 2026 00:49:59 +0000 (0:00:00.538) 0:00:01.587 ******** 2026-04-04 00:51:45.909226 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-04 00:51:45.909231 | orchestrator | 2026-04-04 00:51:45.909236 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-04 00:51:45.909240 | orchestrator | Saturday 04 April 2026 00:50:00 +0000 (0:00:01.032) 0:00:02.619 ******** 2026-04-04 00:51:45.909245 | orchestrator | changed: [testbed-manager] 2026-04-04 00:51:45.909250 | orchestrator | 2026-04-04 00:51:45.909255 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-04 00:51:45.909260 | orchestrator | Saturday 04 April 2026 00:50:01 +0000 (0:00:00.954) 0:00:03.574 ******** 2026-04-04 00:51:45.909265 | orchestrator | changed: [testbed-manager] 2026-04-04 00:51:45.909270 | orchestrator | 2026-04-04 00:51:45.909366 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-04 00:51:45.909374 | orchestrator | Saturday 04 April 2026 00:50:01 +0000 (0:00:00.503) 0:00:04.077 ******** 2026-04-04 00:51:45.909379 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-04 00:51:45.909384 | orchestrator | 2026-04-04 00:51:45.909389 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-04 00:51:45.909394 | orchestrator | Saturday 04 April 2026 00:50:03 +0000 (0:00:01.521) 0:00:05.598 ******** 2026-04-04 00:51:45.909399 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-04 00:51:45.909404 | orchestrator | 2026-04-04 00:51:45.909419 | orchestrator 
| TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-04 00:51:45.909424 | orchestrator | Saturday 04 April 2026 00:50:04 +0000 (0:00:00.781) 0:00:06.380 ******** 2026-04-04 00:51:45.909429 | orchestrator | ok: [testbed-manager] 2026-04-04 00:51:45.909434 | orchestrator | 2026-04-04 00:51:45.909439 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-04 00:51:45.909444 | orchestrator | Saturday 04 April 2026 00:50:04 +0000 (0:00:00.397) 0:00:06.778 ******** 2026-04-04 00:51:45.909449 | orchestrator | ok: [testbed-manager] 2026-04-04 00:51:45.909453 | orchestrator | 2026-04-04 00:51:45.909458 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:51:45.909463 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:51:45.909468 | orchestrator | 2026-04-04 00:51:45.909473 | orchestrator | 2026-04-04 00:51:45.909478 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:51:45.909483 | orchestrator | Saturday 04 April 2026 00:50:04 +0000 (0:00:00.307) 0:00:07.086 ******** 2026-04-04 00:51:45.909488 | orchestrator | =============================================================================== 2026-04-04 00:51:45.909493 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2026-04-04 00:51:45.909498 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.03s 2026-04-04 00:51:45.909503 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.95s 2026-04-04 00:51:45.909519 | orchestrator | Get home directory of operator user ------------------------------------- 0.80s 2026-04-04 00:51:45.909524 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s 
2026-04-04 00:51:45.909529 | orchestrator | Create .kube directory -------------------------------------------------- 0.54s 2026-04-04 00:51:45.909534 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.50s 2026-04-04 00:51:45.909539 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s 2026-04-04 00:51:45.909545 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2026-04-04 00:51:45.909550 | orchestrator | 2026-04-04 00:51:45.909556 | orchestrator | 2026-04-04 00:51:45.909561 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-04 00:51:45.909567 | orchestrator | 2026-04-04 00:51:45.909572 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-04 00:51:45.909578 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:00.126) 0:00:00.126 ******** 2026-04-04 00:51:45.909584 | orchestrator | ok: [localhost] => { 2026-04-04 00:51:45.909591 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-04 00:51:45.909596 | orchestrator | } 2026-04-04 00:51:45.909602 | orchestrator | 2026-04-04 00:51:45.909608 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-04 00:51:45.909614 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:00.063) 0:00:00.189 ******** 2026-04-04 00:51:45.909620 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-04 00:51:45.909633 | orchestrator | ...ignoring 2026-04-04 00:51:45.909639 | orchestrator | 2026-04-04 00:51:45.909645 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-04 00:51:45.909650 | orchestrator | Saturday 04 April 2026 00:48:44 +0000 (0:00:03.087) 0:00:03.277 ******** 2026-04-04 00:51:45.909656 | orchestrator | skipping: [localhost] 2026-04-04 00:51:45.909662 | orchestrator | 2026-04-04 00:51:45.909668 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-04 00:51:45.909674 | orchestrator | Saturday 04 April 2026 00:48:45 +0000 (0:00:00.254) 0:00:03.532 ******** 2026-04-04 00:51:45.909680 | orchestrator | ok: [localhost] 2026-04-04 00:51:45.909685 | orchestrator | 2026-04-04 00:51:45.909691 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:51:45.909697 | orchestrator | 2026-04-04 00:51:45.909702 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:51:45.909708 | orchestrator | Saturday 04 April 2026 00:48:45 +0000 (0:00:00.651) 0:00:04.183 ******** 2026-04-04 00:51:45.909714 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:45.909720 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:45.909726 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:45.909734 | orchestrator | 2026-04-04 00:51:45.909742 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:51:45.909750 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.608) 0:00:04.793 ******** 2026-04-04 00:51:45.909758 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-04 00:51:45.909768 | orchestrator | ok: [testbed-node-0] => 
(item=enable_rabbitmq_True) 2026-04-04 00:51:45.909775 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-04 00:51:45.909783 | orchestrator | 2026-04-04 00:51:45.909790 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-04 00:51:45.909798 | orchestrator | 2026-04-04 00:51:45.909806 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-04 00:51:45.909813 | orchestrator | Saturday 04 April 2026 00:48:47 +0000 (0:00:00.656) 0:00:05.450 ******** 2026-04-04 00:51:45.909822 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:45.909830 | orchestrator | 2026-04-04 00:51:45.909838 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-04 00:51:45.909846 | orchestrator | Saturday 04 April 2026 00:48:47 +0000 (0:00:00.629) 0:00:06.080 ******** 2026-04-04 00:51:45.909853 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:45.909862 | orchestrator | 2026-04-04 00:51:45.909871 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-04 00:51:45.909885 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:01.386) 0:00:07.467 ******** 2026-04-04 00:51:45.909894 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.909903 | orchestrator | 2026-04-04 00:51:45.909912 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-04 00:51:45.909920 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.251) 0:00:07.719 ******** 2026-04-04 00:51:45.909926 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.909932 | orchestrator | 2026-04-04 00:51:45.909938 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-04 00:51:45.909944 | 
orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.233) 0:00:07.952 ******** 2026-04-04 00:51:45.909950 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.909955 | orchestrator | 2026-04-04 00:51:45.909959 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-04 00:51:45.909964 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.255) 0:00:08.208 ******** 2026-04-04 00:51:45.909969 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.909974 | orchestrator | 2026-04-04 00:51:45.909979 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-04 00:51:45.909984 | orchestrator | Saturday 04 April 2026 00:48:50 +0000 (0:00:00.285) 0:00:08.493 ******** 2026-04-04 00:51:45.909996 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:45.910001 | orchestrator | 2026-04-04 00:51:45.910007 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-04 00:51:45.910062 | orchestrator | Saturday 04 April 2026 00:48:50 +0000 (0:00:00.445) 0:00:08.938 ******** 2026-04-04 00:51:45.910069 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:45.910075 | orchestrator | 2026-04-04 00:51:45.910079 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-04 00:51:45.910106 | orchestrator | Saturday 04 April 2026 00:48:51 +0000 (0:00:00.792) 0:00:09.731 ******** 2026-04-04 00:51:45.910112 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.910117 | orchestrator | 2026-04-04 00:51:45.910122 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-04 00:51:45.910134 | orchestrator | Saturday 04 April 2026 00:48:51 +0000 (0:00:00.533) 0:00:10.265 ******** 2026-04-04 00:51:45.910139 | orchestrator | 
skipping: [testbed-node-0] 2026-04-04 00:51:45.910144 | orchestrator | 2026-04-04 00:51:45.910149 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-04 00:51:45.910154 | orchestrator | Saturday 04 April 2026 00:48:52 +0000 (0:00:00.342) 0:00:10.607 ******** 2026-04-04 00:51:45.910163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910195 | orchestrator | 2026-04-04 00:51:45.910200 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-04 00:51:45.910205 | orchestrator | Saturday 04 April 2026 00:48:53 +0000 (0:00:01.140) 0:00:11.748 ******** 2026-04-04 00:51:45.910215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910235 | orchestrator | 2026-04-04 00:51:45.910240 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-04 00:51:45.910251 | orchestrator | Saturday 04 April 2026 00:48:55 +0000 (0:00:01.882) 0:00:13.632 ******** 2026-04-04 00:51:45.910260 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-04 00:51:45.910268 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-04 00:51:45.910275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-04 00:51:45.910298 | orchestrator | 2026-04-04 00:51:45.910307 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-04-04 00:51:45.910315 | orchestrator | Saturday 04 April 2026 00:48:57 +0000 (0:00:01.876) 0:00:15.509 ********
2026-04-04 00:51:45.910323 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-04 00:51:45.910331 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-04 00:51:45.910339 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-04 00:51:45.910347 | orchestrator |
2026-04-04 00:51:45.910354 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-04 00:51:45.910367 | orchestrator | Saturday 04 April 2026 00:48:59 +0000 (0:00:02.106) 0:00:17.615 ********
2026-04-04 00:51:45.910375 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-04 00:51:45.910383 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-04 00:51:45.910391 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-04 00:51:45.910399 | orchestrator |
2026-04-04 00:51:45.910408 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-04 00:51:45.910413 | orchestrator | Saturday 04 April 2026 00:49:00 +0000 (0:00:01.095) 0:00:18.711 ********
2026-04-04 00:51:45.910418 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-04 00:51:45.910423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-04 00:51:45.910428 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-04 00:51:45.910433 | orchestrator |
2026-04-04 00:51:45.910438 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-04 00:51:45.910443 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:01.142) 0:00:19.853 ********
2026-04-04 00:51:45.910448 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-04 00:51:45.910456 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-04 00:51:45.910463 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-04 00:51:45.910471 | orchestrator |
2026-04-04 00:51:45.910478 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-04 00:51:45.910487 | orchestrator | Saturday 04 April 2026 00:49:02 +0000 (0:00:01.129) 0:00:20.982 ********
2026-04-04 00:51:45.910494 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-04 00:51:45.910501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-04 00:51:45.910509 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-04 00:51:45.910518 | orchestrator |
2026-04-04 00:51:45.910526 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-04 00:51:45.910534 | orchestrator | Saturday 04 April 2026 00:49:03 +0000 (0:00:01.173) 0:00:22.156 ********
2026-04-04 00:51:45.910541 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:45.910552 | orchestrator |
2026-04-04 00:51:45.910557 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-04-04 00:51:45.910562 | orchestrator | Saturday 04 April 2026 00:49:04 +0000 (0:00:00.398) 0:00:22.554 ********
2026-04-04
00:51:45.910571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910593 | orchestrator | 2026-04-04 00:51:45.910598 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-04 00:51:45.910603 | orchestrator | Saturday 04 April 2026 00:49:05 +0000 (0:00:00.988) 0:00:23.542 ******** 2026-04-04 00:51:45.910608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910618 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.910626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910632 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:45.910642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910647 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:45.910652 | orchestrator | 2026-04-04 00:51:45.910657 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-04 00:51:45.910662 | orchestrator | Saturday 04 April 2026 00:49:05 +0000 (0:00:00.470) 0:00:24.012 ******** 2026-04-04 00:51:45.910667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910680 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.910686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910691 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:45.910699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910704 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:45.910709 | orchestrator | 2026-04-04 00:51:45.910714 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-04 00:51:45.910723 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:00.730) 0:00:24.743 ******** 2026-04-04 00:51:45.910728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:45.910752 | orchestrator | 2026-04-04 00:51:45.910757 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-04 00:51:45.910762 | orchestrator | Saturday 04 April 2026 00:49:07 +0000 (0:00:00.974) 0:00:25.718 ******** 2026-04-04 00:51:45.910767 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 00:51:45.910772 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:51:45.910777 | orchestrator | } 2026-04-04 00:51:45.910784 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 00:51:45.910792 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:51:45.910799 | orchestrator | } 2026-04-04 00:51:45.910806 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 00:51:45.910814 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:51:45.910821 | orchestrator | } 2026-04-04 00:51:45.910828 | orchestrator | 2026-04-04 00:51:45.910836 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 00:51:45.910844 | orchestrator | Saturday 04 April 2026 00:49:07 +0000 (0:00:00.304) 0:00:26.022 ******** 2026-04-04 00:51:45.910859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910875 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:45.910882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:45.910887 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:45.910893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-04 00:51:45.910899 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:45.910903 | orchestrator |
2026-04-04 00:51:45.910908 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-04-04 00:51:45.910914 | orchestrator | Saturday 04 April 2026 00:49:08 +0000 (0:00:00.722) 0:00:26.745 ********
2026-04-04 00:51:45.910919 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:45.910924 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:45.910929 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:45.910933 | orchestrator |
2026-04-04 00:51:45.910939 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-04 00:51:45.910944 | orchestrator | Saturday 04 April 2026 00:49:09 +0000 (0:00:00.765) 0:00:27.511 ********
2026-04-04 00:51:45.910949 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:45.910953 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:45.910959 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:45.910963 | orchestrator |
2026-04-04 00:51:45.910968 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-04 00:51:45.910973 | orchestrator | Saturday 04 April 2026 00:49:17 +0000 (0:00:08.757) 0:00:36.268 ********
2026-04-04 00:51:45.910978 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:45.910983 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:45.910988 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:45.910993 | orchestrator |
2026-04-04 00:51:45.910998 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-04 00:51:45.911003 | orchestrator |
2026-04-04 00:51:45.911012 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-04 00:51:45.911020 | orchestrator | Saturday 04 April 2026 00:49:18 +0000 (0:00:00.576) 0:00:36.845 ********
2026-04-04 00:51:45.911025 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:51:45.911030 | orchestrator |
2026-04-04 00:51:45.911034 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-04 00:51:45.911039 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:00.749) 0:00:37.595 ********
2026-04-04 00:51:45.911044 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:45.911049 | orchestrator |
2026-04-04 00:51:45.911054 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-04 00:51:45.911058 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:00.087) 0:00:37.683 ********
2026-04-04 00:51:45.911063 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:45.911068 | orchestrator |
2026-04-04 00:51:45.911072 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-04 00:51:45.911077 | orchestrator | Saturday 04 April 2026 00:49:21 +0000 (0:00:01.666) 0:00:39.349 ********
2026-04-04 00:51:45.911082 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:45.911087 | orchestrator |
2026-04-04 00:51:45.911092 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-04 00:51:45.911097 | orchestrator |
2026-04-04 00:51:45.911101 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-04 00:51:45.911106 | orchestrator | Saturday 04 April 2026 00:51:13 +0000 (0:01:52.399) 0:02:31.749 ********
2026-04-04 00:51:45.911111 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:51:45.911116 | orchestrator |
2026-04-04 00:51:45.911120 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-04 00:51:45.911125 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.703) 0:02:32.452 ********
2026-04-04 00:51:45.911130 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:45.911135 | orchestrator |
2026-04-04 00:51:45.911140 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-04 00:51:45.911144 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.083) 0:02:32.536 ********
2026-04-04 00:51:45.911149 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:45.911154 | orchestrator |
2026-04-04 00:51:45.911159 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-04 00:51:45.911164 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:01.507) 0:02:34.044 ********
2026-04-04 00:51:45.911169 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:45.911173 | orchestrator |
2026-04-04 00:51:45.911178 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-04 00:51:45.911183 | orchestrator |
2026-04-04 00:51:45.911188 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-04 00:51:45.911193 | orchestrator | Saturday 04 April 2026 00:51:27 +0000 (0:00:11.344) 0:02:45.388 ********
2026-04-04 00:51:45.911197 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:51:45.911202 | orchestrator |
2026-04-04 00:51:45.911207 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-04 00:51:45.911212 | orchestrator | Saturday 04 April 2026 00:51:27 +0000 (0:00:00.769) 0:02:46.158 ********
2026-04-04 00:51:45.911241 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:45.911247 | orchestrator |
2026-04-04 00:51:45.911251 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-04 00:51:45.911256 | orchestrator | Saturday 04 April 2026 00:51:28 +0000 (0:00:00.172) 0:02:46.330 ********
2026-04-04 00:51:45.911261 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:45.911266 | orchestrator |
2026-04-04 00:51:45.911270 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-04 00:51:45.911275 | orchestrator | Saturday 04 April 2026 00:51:29 +0000 (0:00:01.748) 0:02:48.079 ********
2026-04-04 00:51:45.911280 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:45.911308 | orchestrator |
2026-04-04 00:51:45.911318 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-04 00:51:45.911329 | orchestrator |
2026-04-04 00:51:45.911335 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-04 00:51:45.911340 | orchestrator | Saturday 04 April 2026 00:51:40 +0000 (0:00:10.787) 0:02:58.867 ********
2026-04-04 00:51:45.911345 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:45.911350 | orchestrator |
2026-04-04 00:51:45.911355 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-04 00:51:45.911360 | orchestrator | Saturday 04 April 2026 00:51:41 +0000 (0:00:00.982) 0:02:59.849 ********
2026-04-04 00:51:45.911365 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:51:45.911369 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:51:45.911378 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:51:45.911383 | orchestrator |
2026-04-04 00:51:45.911388 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:51:45.911393 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-04 00:51:45.911400 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-04-04 00:51:45.911405 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-04 00:51:45.911410 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-04 00:51:45.911415 | orchestrator |
2026-04-04 00:51:45.911420 | orchestrator |
2026-04-04 00:51:45.911425 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:51:45.911430 | orchestrator | Saturday 04 April 2026 00:51:44 +0000 (0:00:03.414) 0:03:03.264 ********
2026-04-04 00:51:45.911435 | orchestrator | ===============================================================================
2026-04-04 00:51:45.911440 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 134.53s
2026-04-04 00:51:45.911449 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.76s
2026-04-04 00:51:45.911454 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.92s
2026-04-04 00:51:45.911459 | orchestrator | rabbitmq : Enable all
stable feature flags ------------------------------ 3.41s 2026-04-04 00:51:45.911464 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.09s 2026-04-04 00:51:45.911469 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.22s 2026-04-04 00:51:45.911474 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.11s 2026-04-04 00:51:45.911479 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.88s 2026-04-04 00:51:45.911483 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.88s 2026-04-04 00:51:45.911488 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.39s 2026-04-04 00:51:45.911493 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.17s 2026-04-04 00:51:45.911498 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.14s 2026-04-04 00:51:45.911503 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.14s 2026-04-04 00:51:45.911507 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.13s 2026-04-04 00:51:45.911512 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.10s 2026-04-04 00:51:45.911517 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 0.99s 2026-04-04 00:51:45.911522 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.98s 2026-04-04 00:51:45.911527 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 0.97s 2026-04-04 00:51:45.911537 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.79s 2026-04-04 00:51:45.911542 | orchestrator | rabbitmq : Creating rabbitmq volume 
------------------------------------- 0.77s 2026-04-04 00:51:45.911659 | orchestrator | 2026-04-04 00:51:45 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:51:45.911669 | orchestrator | 2026-04-04 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:51:48.940101 | orchestrator | 2026-04-04 00:51:48 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:51:48.941004 | orchestrator | 2026-04-04 00:51:48 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:51:48.941331 | orchestrator | 2026-04-04 00:51:48 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:51:48.941494 | orchestrator | 2026-04-04 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:51:51.972333 | orchestrator | 2026-04-04 00:51:51 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:51:51.973733 | orchestrator | 2026-04-04 00:51:51 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:51:51.975230 | orchestrator | 2026-04-04 00:51:51 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:51:51.975309 | orchestrator | 2026-04-04 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:51:55.000097 | orchestrator | 2026-04-04 00:51:54 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:51:55.000972 | orchestrator | 2026-04-04 00:51:55 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:51:55.001657 | orchestrator | 2026-04-04 00:51:55 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:51:55.001752 | orchestrator | 2026-04-04 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:51:58.046096 | orchestrator | 2026-04-04 00:51:58 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 
00:51:58.047244 | orchestrator | 2026-04-04 00:51:58 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:51:58.048632 | orchestrator | 2026-04-04 00:51:58 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:51:58.049121 | orchestrator | 2026-04-04 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:01.088174 | orchestrator | 2026-04-04 00:52:01 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:01.090436 | orchestrator | 2026-04-04 00:52:01 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:01.091746 | orchestrator | 2026-04-04 00:52:01 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:01.091810 | orchestrator | 2026-04-04 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:04.120655 | orchestrator | 2026-04-04 00:52:04 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:04.121980 | orchestrator | 2026-04-04 00:52:04 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:04.122903 | orchestrator | 2026-04-04 00:52:04 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:04.122947 | orchestrator | 2026-04-04 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:07.152379 | orchestrator | 2026-04-04 00:52:07 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:07.153841 | orchestrator | 2026-04-04 00:52:07 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:07.155919 | orchestrator | 2026-04-04 00:52:07 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:07.155993 | orchestrator | 2026-04-04 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:10.177445 | orchestrator | 2026-04-04 00:52:10 | 
INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:10.178605 | orchestrator | 2026-04-04 00:52:10 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:10.179981 | orchestrator | 2026-04-04 00:52:10 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:10.180134 | orchestrator | 2026-04-04 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:13.207954 | orchestrator | 2026-04-04 00:52:13 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:13.208029 | orchestrator | 2026-04-04 00:52:13 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:13.208807 | orchestrator | 2026-04-04 00:52:13 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:13.208833 | orchestrator | 2026-04-04 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:16.259947 | orchestrator | 2026-04-04 00:52:16 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:16.260172 | orchestrator | 2026-04-04 00:52:16 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:16.261098 | orchestrator | 2026-04-04 00:52:16 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:16.261153 | orchestrator | 2026-04-04 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:19.285736 | orchestrator | 2026-04-04 00:52:19 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:19.285961 | orchestrator | 2026-04-04 00:52:19 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:19.286470 | orchestrator | 2026-04-04 00:52:19 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:19.287390 | orchestrator | 2026-04-04 00:52:19 | INFO  | Wait 1 second(s) until 
the next check 2026-04-04 00:52:22.307738 | orchestrator | 2026-04-04 00:52:22 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:22.307899 | orchestrator | 2026-04-04 00:52:22 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:22.308770 | orchestrator | 2026-04-04 00:52:22 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:22.308838 | orchestrator | 2026-04-04 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:25.328786 | orchestrator | 2026-04-04 00:52:25 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:25.330107 | orchestrator | 2026-04-04 00:52:25 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:25.330464 | orchestrator | 2026-04-04 00:52:25 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:25.330479 | orchestrator | 2026-04-04 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:28.352700 | orchestrator | 2026-04-04 00:52:28 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:28.354817 | orchestrator | 2026-04-04 00:52:28 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:28.355604 | orchestrator | 2026-04-04 00:52:28 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:28.355648 | orchestrator | 2026-04-04 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:31.378651 | orchestrator | 2026-04-04 00:52:31 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:31.379059 | orchestrator | 2026-04-04 00:52:31 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:31.379871 | orchestrator | 2026-04-04 00:52:31 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 
00:52:31.379898 | orchestrator | 2026-04-04 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:34.404153 | orchestrator | 2026-04-04 00:52:34 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:34.406596 | orchestrator | 2026-04-04 00:52:34 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:34.408809 | orchestrator | 2026-04-04 00:52:34 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:34.408893 | orchestrator | 2026-04-04 00:52:34 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:37.432703 | orchestrator | 2026-04-04 00:52:37 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:37.433378 | orchestrator | 2026-04-04 00:52:37 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:37.434442 | orchestrator | 2026-04-04 00:52:37 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:37.434469 | orchestrator | 2026-04-04 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:40.465814 | orchestrator | 2026-04-04 00:52:40 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:40.466461 | orchestrator | 2026-04-04 00:52:40 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:40.467125 | orchestrator | 2026-04-04 00:52:40 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:40.467357 | orchestrator | 2026-04-04 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:43.506320 | orchestrator | 2026-04-04 00:52:43 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:43.506468 | orchestrator | 2026-04-04 00:52:43 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:43.507050 | orchestrator | 2026-04-04 00:52:43 | 
INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:43.507082 | orchestrator | 2026-04-04 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:46.537715 | orchestrator | 2026-04-04 00:52:46 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:46.538305 | orchestrator | 2026-04-04 00:52:46 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:46.538999 | orchestrator | 2026-04-04 00:52:46 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:46.539039 | orchestrator | 2026-04-04 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:49.569544 | orchestrator | 2026-04-04 00:52:49 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:49.569820 | orchestrator | 2026-04-04 00:52:49 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:49.570763 | orchestrator | 2026-04-04 00:52:49 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:49.570875 | orchestrator | 2026-04-04 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:52.597105 | orchestrator | 2026-04-04 00:52:52 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:52.597786 | orchestrator | 2026-04-04 00:52:52 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:52.598736 | orchestrator | 2026-04-04 00:52:52 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:52.598782 | orchestrator | 2026-04-04 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:55.630773 | orchestrator | 2026-04-04 00:52:55 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:55.632724 | orchestrator | 2026-04-04 00:52:55 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in 
state STARTED 2026-04-04 00:52:55.634708 | orchestrator | 2026-04-04 00:52:55 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:55.634774 | orchestrator | 2026-04-04 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:58.675954 | orchestrator | 2026-04-04 00:52:58 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:52:58.678775 | orchestrator | 2026-04-04 00:52:58 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:52:58.680811 | orchestrator | 2026-04-04 00:52:58 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:52:58.680864 | orchestrator | 2026-04-04 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:53:01.723149 | orchestrator | 2026-04-04 00:53:01 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:53:01.723483 | orchestrator | 2026-04-04 00:53:01 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:53:01.724400 | orchestrator | 2026-04-04 00:53:01 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:53:01.724476 | orchestrator | 2026-04-04 00:53:01 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:53:04.757701 | orchestrator | 2026-04-04 00:53:04 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:53:04.758214 | orchestrator | 2026-04-04 00:53:04 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:53:04.758880 | orchestrator | 2026-04-04 00:53:04 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state STARTED 2026-04-04 00:53:04.758905 | orchestrator | 2026-04-04 00:53:04 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:53:07.797251 | orchestrator | 2026-04-04 00:53:07 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:53:07.798609 | orchestrator 
| 2026-04-04 00:53:07 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:53:07.801969 | orchestrator | 2026-04-04 00:53:07 | INFO  | Task 1f52482f-2304-4381-8e9e-0cabca941446 is in state SUCCESS 2026-04-04 00:53:07.803467 | orchestrator | 2026-04-04 00:53:07.803534 | orchestrator | 2026-04-04 00:53:07.803551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:53:07.803568 | orchestrator | 2026-04-04 00:53:07.803582 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:53:07.803596 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:00.250) 0:00:00.250 ******** 2026-04-04 00:53:07.803721 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.803742 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.803775 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.803784 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:07.803792 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:07.803832 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:07.803845 | orchestrator | 2026-04-04 00:53:07.803856 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:53:07.803864 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:00.599) 0:00:00.850 ******** 2026-04-04 00:53:07.803872 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-04 00:53:07.803880 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-04 00:53:07.803888 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-04 00:53:07.803896 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-04 00:53:07.803904 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-04 00:53:07.803912 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-04 00:53:07.803920 | orchestrator | 
2026-04-04 00:53:07.803928 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-04 00:53:07.803936 | orchestrator |
2026-04-04 00:53:07.803944 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-04 00:53:07.803952 | orchestrator | Saturday 04 April 2026 00:49:31 +0000 (0:00:01.594) 0:00:02.445 ********
2026-04-04 00:53:07.803970 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:53:07.803992 | orchestrator |
2026-04-04 00:53:07.804020 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-04 00:53:07.804033 | orchestrator | Saturday 04 April 2026 00:49:32 +0000 (0:00:01.401) 0:00:03.847 ********
2026-04-04 00:53:07.804049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804224 | orchestrator |
2026-04-04 00:53:07.804250 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-04 00:53:07.804259 | orchestrator | Saturday 04 April 2026 00:49:34 +0000 (0:00:01.756) 0:00:05.604 ********
2026-04-04 00:53:07.804268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804323 | orchestrator |
2026-04-04 00:53:07.804331 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-04 00:53:07.804402 | orchestrator | Saturday 04 April 2026 00:49:36 +0000 (0:00:01.633) 0:00:07.237 ********
2026-04-04 00:53:07.804411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804474 | orchestrator |
2026-04-04 00:53:07.804487 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-04 00:53:07.804495 | orchestrator | Saturday 04 April 2026 00:49:37 +0000 (0:00:00.918) 0:00:08.155 ********
2026-04-04 00:53:07.804503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804558 | orchestrator |
2026-04-04 00:53:07.804571 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-04-04 00:53:07.804579 | orchestrator | Saturday 04 April 2026 00:49:38 +0000 (0:00:01.364) 0:00:09.520 ********
2026-04-04 00:53:07.804587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.804647 | orchestrator |
2026-04-04 00:53:07.804655 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-04-04 00:53:07.804664 | orchestrator | Saturday 04 April 2026 00:49:40 +0000 (0:00:01.520) 0:00:11.041 ********
2026-04-04 00:53:07.804672 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 00:53:07.804680 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.804689 | orchestrator | }
2026-04-04 00:53:07.804697 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 00:53:07.804705 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.804713 | orchestrator | }
2026-04-04 00:53:07.804721 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 00:53:07.804729 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.804737 | orchestrator | }
2026-04-04 00:53:07.804745 | orchestrator | changed: [testbed-node-3] => {
2026-04-04 00:53:07.804752 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.804760 | orchestrator | }
2026-04-04 00:53:07.804768 | orchestrator | changed: [testbed-node-4] => {
2026-04-04 00:53:07.804776 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.804784 | orchestrator | }
2026-04-04 00:53:07.804792 | orchestrator | changed: [testbed-node-5] => {
2026-04-04 00:53:07.804800 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.804808 | orchestrator | }
2026-04-04 00:53:07.804816 | orchestrator |
2026-04-04 00:53:07.804824 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-04 00:53:07.804832 | orchestrator | Saturday 04 April 2026 00:49:40 +0000 (0:00:00.569) 0:00:11.610 ********
2026-04-04 00:53:07.804841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro',
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.804849 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.804862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.804871 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.804879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.804887 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.804895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.804908 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:07.804917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.804930 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:07.804939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.804947 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:07.804955 | orchestrator | 2026-04-04 00:53:07.804963 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-04 00:53:07.804971 | orchestrator | Saturday 04 April 2026 00:49:42 +0000 (0:00:01.731) 0:00:13.342 ******** 2026-04-04 00:53:07.804981 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:07.804994 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:07.805013 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:07.805027 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:07.805039 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:07.805051 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:07.805063 | orchestrator | 2026-04-04 00:53:07.805076 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-04 00:53:07.805119 | orchestrator | Saturday 04 April 2026 00:49:45 +0000 (0:00:03.419) 0:00:16.761 ******** 2026-04-04 00:53:07.805133 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 
2026-04-04 00:53:07.805156 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-04 00:53:07.805170 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:53:07.805183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-04 00:53:07.805195 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-04 00:53:07.805208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-04 00:53:07.805220 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-04 00:53:07.805234 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:53:07.805248 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-04 00:53:07.805263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:53:07.805276 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:53:07.805285 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:53:07.805300 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:53:07.805309 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-04 00:53:07.805318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:53:07.805327 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-04 00:53:07.805336 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-04 00:53:07.805344 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-04 00:53:07.805362 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-04 00:53:07.805370 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:53:07.805378 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:53:07.805391 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:53:07.805405 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:53:07.805425 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:53:07.805439 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:53:07.805453 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:53:07.805466 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:53:07.805480 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:53:07.805494 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2026-04-04 00:53:07.805507 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:53:07.805521 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:53:07.805530 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:53:07.805538 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:53:07.805546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-04 00:53:07.805554 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:53:07.805562 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:53:07.805570 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:53:07.805578 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-04 00:53:07.805587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-04 00:53:07.805595 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-04 00:53:07.805604 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-04 00:53:07.805612 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-04 00:53:07.805620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 
'physnet1:br-ex', 'state': 'present'}) 2026-04-04 00:53:07.805629 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-04 00:53:07.805638 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-04 00:53:07.805646 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-04 00:53:07.805662 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-04 00:53:07.805676 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-04 00:53:07.805685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-04 00:53:07.805693 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-04 00:53:07.805701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-04 00:53:07.805709 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-04 00:53:07.805717 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-04 00:53:07.805725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-04 00:53:07.805733 | orchestrator | 2026-04-04 00:53:07.805741 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-04-04 00:53:07.805749 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:22.272) 0:00:39.034 ******** 2026-04-04 00:53:07.805757 | orchestrator | 2026-04-04 00:53:07.805765 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-04 00:53:07.805773 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:00.156) 0:00:39.191 ******** 2026-04-04 00:53:07.805781 | orchestrator | 2026-04-04 00:53:07.805789 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-04 00:53:07.805797 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:00.059) 0:00:39.250 ******** 2026-04-04 00:53:07.805804 | orchestrator | 2026-04-04 00:53:07.805823 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-04 00:53:07.805832 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:00.060) 0:00:39.311 ******** 2026-04-04 00:53:07.805840 | orchestrator | 2026-04-04 00:53:07.805848 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-04 00:53:07.805856 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:00.057) 0:00:39.368 ******** 2026-04-04 00:53:07.805864 | orchestrator | 2026-04-04 00:53:07.805872 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-04 00:53:07.805880 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:00.057) 0:00:39.425 ******** 2026-04-04 00:53:07.805888 | orchestrator | 2026-04-04 00:53:07.805896 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-04 00:53:07.805904 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:00.064) 0:00:39.490 ******** 2026-04-04 00:53:07.805912 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:07.805920 | orchestrator | ok: 
[testbed-node-2] 2026-04-04 00:53:07.805928 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.805936 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:07.805944 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.805952 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:07.805960 | orchestrator | 2026-04-04 00:53:07.805968 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-04 00:53:07.805976 | orchestrator | Saturday 04 April 2026 00:50:10 +0000 (0:00:01.491) 0:00:40.982 ******** 2026-04-04 00:53:07.805985 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:07.805993 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:07.806001 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:07.806009 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:07.806074 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:07.806153 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:07.806169 | orchestrator | 2026-04-04 00:53:07.806183 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-04 00:53:07.806200 | orchestrator | 2026-04-04 00:53:07.806209 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-04 00:53:07.806224 | orchestrator | Saturday 04 April 2026 00:50:18 +0000 (0:00:08.244) 0:00:49.226 ******** 2026-04-04 00:53:07.806237 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:07.806251 | orchestrator | 2026-04-04 00:53:07.806264 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-04 00:53:07.806277 | orchestrator | Saturday 04 April 2026 00:50:18 +0000 (0:00:00.674) 0:00:49.901 ******** 2026-04-04 00:53:07.806290 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-04 00:53:07.806304 | orchestrator | 2026-04-04 00:53:07.806318 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-04 00:53:07.806330 | orchestrator | Saturday 04 April 2026 00:50:19 +0000 (0:00:00.559) 0:00:50.460 ******** 2026-04-04 00:53:07.806342 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.806355 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.806368 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.806382 | orchestrator | 2026-04-04 00:53:07.806394 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-04 00:53:07.806408 | orchestrator | Saturday 04 April 2026 00:50:20 +0000 (0:00:01.091) 0:00:51.552 ******** 2026-04-04 00:53:07.806422 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.806434 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.806448 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.806462 | orchestrator | 2026-04-04 00:53:07.806475 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-04 00:53:07.806488 | orchestrator | Saturday 04 April 2026 00:50:20 +0000 (0:00:00.333) 0:00:51.885 ******** 2026-04-04 00:53:07.806502 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.806515 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.806529 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.806542 | orchestrator | 2026-04-04 00:53:07.806556 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-04 00:53:07.806580 | orchestrator | Saturday 04 April 2026 00:50:21 +0000 (0:00:00.344) 0:00:52.230 ******** 2026-04-04 00:53:07.806590 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.806598 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.806606 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.806614 | orchestrator | 
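The `lookup_cluster.yml` tasks above drive the bootstrap path that follows: the role checks each of the three DB hosts for pre-existing OVN NB/SB container volumes, and from that establishes whether a Raft cluster already exists. In this run no volumes exist, so the "(new cluster)" bootstrap-args facts are set and the "(new member)" variants are skipped. A sketch of that decision, using `ovn-ctl`'s cluster option names; the helper and addresses are illustrative assumptions, not the role's actual code:

```python
# Illustrative sketch (not kolla-ansible's real implementation) of the
# bootstrap decision visible in the log: a fresh cluster needs only its own
# cluster-local address, while a node joining an existing cluster must also
# be pointed at a remote member via --db-nb-cluster-remote-addr.
def nb_bootstrap_args(cluster_exists: bool, local_addr: str, remote_addr: str) -> str:
    if not cluster_exists:
        # Corresponds to "Set bootstrap args fact for NB (new cluster)"
        return f"--db-nb-cluster-local-addr={local_addr}"
    # Corresponds to the "(new member)" branch, skipped in this run
    return (f"--db-nb-cluster-local-addr={local_addr} "
            f"--db-nb-cluster-remote-addr={remote_addr}")

# Address shown here is hypothetical; the Raft port differs from the
# client ports (6641/6642) seen elsewhere in the log.
print(nb_bootstrap_args(False, "tcp:192.168.16.10:6643", ""))
```

Because all three hosts take the "new cluster" branch together, the subsequent liveness, leader-lookup, and stale-node-removal tasks are all skipped, which is exactly what the `skipping:` results below show.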
2026-04-04 00:53:07.806623 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-04 00:53:07.806631 | orchestrator | Saturday 04 April 2026 00:50:21 +0000 (0:00:00.269) 0:00:52.500 ******** 2026-04-04 00:53:07.806639 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.806647 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.806655 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.806663 | orchestrator | 2026-04-04 00:53:07.806671 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-04 00:53:07.806679 | orchestrator | Saturday 04 April 2026 00:50:21 +0000 (0:00:00.368) 0:00:52.868 ******** 2026-04-04 00:53:07.806687 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.806695 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.806704 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.806712 | orchestrator | 2026-04-04 00:53:07.806719 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-04 00:53:07.806728 | orchestrator | Saturday 04 April 2026 00:50:22 +0000 (0:00:00.227) 0:00:53.096 ******** 2026-04-04 00:53:07.806735 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.806744 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.806752 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.806760 | orchestrator | 2026-04-04 00:53:07.806768 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-04 00:53:07.806784 | orchestrator | Saturday 04 April 2026 00:50:22 +0000 (0:00:00.243) 0:00:53.339 ******** 2026-04-04 00:53:07.806792 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.806800 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.806809 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.806816 | orchestrator | 2026-04-04 
00:53:07.806825 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-04 00:53:07.806838 | orchestrator | Saturday 04 April 2026 00:50:22 +0000 (0:00:00.252) 0:00:53.592 ******** 2026-04-04 00:53:07.806846 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.806854 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.806862 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.806870 | orchestrator | 2026-04-04 00:53:07.806879 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-04 00:53:07.806887 | orchestrator | Saturday 04 April 2026 00:50:22 +0000 (0:00:00.229) 0:00:53.821 ******** 2026-04-04 00:53:07.806894 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.806902 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.806911 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.806918 | orchestrator | 2026-04-04 00:53:07.806927 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-04 00:53:07.806935 | orchestrator | Saturday 04 April 2026 00:50:23 +0000 (0:00:00.383) 0:00:54.204 ******** 2026-04-04 00:53:07.806943 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.806951 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.806959 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.806967 | orchestrator | 2026-04-04 00:53:07.806976 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-04 00:53:07.806984 | orchestrator | Saturday 04 April 2026 00:50:23 +0000 (0:00:00.229) 0:00:54.434 ******** 2026-04-04 00:53:07.806992 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.807000 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.807008 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.807015 | orchestrator | 2026-04-04 
00:53:07.807024 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-04 00:53:07.807032 | orchestrator | Saturday 04 April 2026 00:50:23 +0000 (0:00:00.219) 0:00:54.653 ******** 2026-04-04 00:53:07.807040 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.807048 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.807056 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.807064 | orchestrator | 2026-04-04 00:53:07.807072 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-04 00:53:07.807103 | orchestrator | Saturday 04 April 2026 00:50:23 +0000 (0:00:00.232) 0:00:54.885 ******** 2026-04-04 00:53:07.807113 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.807121 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.807129 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.807137 | orchestrator | 2026-04-04 00:53:07.807145 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-04 00:53:07.807153 | orchestrator | Saturday 04 April 2026 00:50:24 +0000 (0:00:00.344) 0:00:55.230 ******** 2026-04-04 00:53:07.807161 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.807169 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.807177 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.807185 | orchestrator | 2026-04-04 00:53:07.807193 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-04 00:53:07.807201 | orchestrator | Saturday 04 April 2026 00:50:24 +0000 (0:00:00.245) 0:00:55.475 ******** 2026-04-04 00:53:07.807208 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.807216 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.807224 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.807232 | orchestrator | 2026-04-04 
00:53:07.807240 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-04 00:53:07.807248 | orchestrator | Saturday 04 April 2026 00:50:24 +0000 (0:00:00.247) 0:00:55.723 ******** 2026-04-04 00:53:07.807263 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.807271 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.807280 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.807288 | orchestrator | 2026-04-04 00:53:07.807296 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-04 00:53:07.807304 | orchestrator | Saturday 04 April 2026 00:50:24 +0000 (0:00:00.244) 0:00:55.967 ******** 2026-04-04 00:53:07.807313 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:07.807321 | orchestrator | 2026-04-04 00:53:07.807335 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-04 00:53:07.807343 | orchestrator | Saturday 04 April 2026 00:50:25 +0000 (0:00:00.664) 0:00:56.632 ******** 2026-04-04 00:53:07.807351 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.807359 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.807373 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.807392 | orchestrator | 2026-04-04 00:53:07.807407 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-04 00:53:07.807420 | orchestrator | Saturday 04 April 2026 00:50:26 +0000 (0:00:00.540) 0:00:57.172 ******** 2026-04-04 00:53:07.807432 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.807445 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.807458 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.807470 | orchestrator | 2026-04-04 00:53:07.807483 | orchestrator | TASK [ovn-db : Check NB cluster status] 
****************************************
2026-04-04 00:53:07.807496 | orchestrator | Saturday 04 April 2026 00:50:26 +0000 (0:00:00.521) 0:00:57.694 ********
2026-04-04 00:53:07.807508 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:07.807521 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.807536 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.807549 | orchestrator |
2026-04-04 00:53:07.807563 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-04 00:53:07.807578 | orchestrator | Saturday 04 April 2026 00:50:27 +0000 (0:00:00.481) 0:00:58.176 ********
2026-04-04 00:53:07.807591 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:07.807605 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.807620 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.807633 | orchestrator |
2026-04-04 00:53:07.807647 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-04 00:53:07.807659 | orchestrator | Saturday 04 April 2026 00:50:27 +0000 (0:00:00.304) 0:00:58.480 ********
2026-04-04 00:53:07.807672 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:07.807686 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.807701 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.807715 | orchestrator |
2026-04-04 00:53:07.807739 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-04 00:53:07.807753 | orchestrator | Saturday 04 April 2026 00:50:27 +0000 (0:00:00.297) 0:00:58.778 ********
2026-04-04 00:53:07.807773 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:07.807782 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.807790 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.807798 | orchestrator |
2026-04-04 00:53:07.807806 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-04-04 00:53:07.807813 | orchestrator | Saturday 04 April 2026 00:50:28 +0000 (0:00:00.307) 0:00:59.085 ********
2026-04-04 00:53:07.807821 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:07.807830 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.807838 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.807846 | orchestrator |
2026-04-04 00:53:07.807855 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-04-04 00:53:07.807863 | orchestrator | Saturday 04 April 2026 00:50:28 +0000 (0:00:00.474) 0:00:59.560 ********
2026-04-04 00:53:07.807871 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:07.807889 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.807897 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.807904 | orchestrator |
2026-04-04 00:53:07.807913 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-04 00:53:07.807920 | orchestrator | Saturday 04 April 2026 00:50:28 +0000 (0:00:00.295) 0:00:59.855 ********
2026-04-04 00:53:07.807931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.807945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.807954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.807972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.807982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.807996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808107 | orchestrator |
2026-04-04 00:53:07.808117 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-04 00:53:07.808126 | orchestrator | Saturday 04 April 2026 00:50:31 +0000 (0:00:03.013) 0:01:02.869 ********
2026-04-04 00:53:07.808134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808270 | orchestrator |
2026-04-04 00:53:07.808278 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-04-04 00:53:07.808286 | orchestrator | Saturday 04 April 2026 00:50:37 +0000 (0:00:05.961) 0:01:08.831 ********
2026-04-04 00:53:07.808295 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-04-04 00:53:07.808303 | orchestrator |
2026-04-04 00:53:07.808312 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-04-04 00:53:07.808320 | orchestrator | Saturday 04 April 2026 00:50:38 +0000 (0:00:00.690) 0:01:09.521 ********
2026-04-04 00:53:07.808328 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.808336 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:07.808344 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:07.808352 | orchestrator |
2026-04-04 00:53:07.808361 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-04-04 00:53:07.808369 | orchestrator | Saturday 04 April 2026 00:50:39 +0000 (0:00:00.716) 0:01:10.237 ********
2026-04-04 00:53:07.808376 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.808385 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:07.808393 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:07.808401 | orchestrator |
2026-04-04 00:53:07.808410 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-04-04 00:53:07.808417 | orchestrator | Saturday 04 April 2026 00:50:41 +0000 (0:00:02.259) 0:01:12.497 ********
2026-04-04 00:53:07.808425 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.808433 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:07.808441 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:07.808449 | orchestrator |
2026-04-04 00:53:07.808457 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-04-04 00:53:07.808466 | orchestrator | Saturday 04 April 2026 00:50:43 +0000 (0:00:01.922) 0:01:14.420 ********
2026-04-04 00:53:07.808491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808738 | orchestrator |
2026-04-04 00:53:07.808747 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-04 00:53:07.808756 | orchestrator | Saturday 04 April 2026 00:50:47 +0000 (0:00:04.511) 0:01:18.932 ********
2026-04-04 00:53:07.808764 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 00:53:07.808773 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.808781 | orchestrator | }
2026-04-04 00:53:07.808789 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 00:53:07.808797 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.808805 | orchestrator | }
2026-04-04 00:53:07.808813 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 00:53:07.808821 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.808829 | orchestrator | }
2026-04-04 00:53:07.808837 | orchestrator |
2026-04-04 00:53:07.808845 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-04 00:53:07.808853 | orchestrator | Saturday 04 April 2026 00:50:48 +0000 (0:00:00.398) 0:01:19.330 ********
2026-04-04 00:53:07.808862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808957 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-2, testbed-node-1, testbed-node-0 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.808965 | orchestrator |
2026-04-04 00:53:07.808973 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-04-04 00:53:07.808987 | orchestrator | Saturday 04 April 2026 00:50:50 +0000 (0:00:02.252) 0:01:21.582 ********
2026-04-04 00:53:07.808995 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-04 00:53:07.809004 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-04 00:53:07.809011 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-04 00:53:07.809019 | orchestrator |
2026-04-04 00:53:07.809027 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-04 00:53:07.809035 | orchestrator | Saturday 04 April 2026 00:51:10 +0000 (0:00:19.482) 0:01:41.065 ********
2026-04-04 00:53:07.809044 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 00:53:07.809051 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.809059 | orchestrator | }
2026-04-04 00:53:07.809067 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 00:53:07.809076 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.809118 | orchestrator | }
2026-04-04 00:53:07.809127 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 00:53:07.809135 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:53:07.809143 | orchestrator | }
2026-04-04 00:53:07.809151 | orchestrator |
2026-04-04 00:53:07.809169 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-04 00:53:07.809177 | orchestrator | Saturday 04 April 2026 00:51:10 +0000 (0:00:00.426) 0:01:41.491 ********
2026-04-04 00:53:07.809185 | orchestrator |
2026-04-04 00:53:07.809194 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-04 00:53:07.809202 | orchestrator | Saturday 04 April 2026 00:51:10 +0000 (0:00:00.050) 0:01:41.541 ********
2026-04-04 00:53:07.809210 | orchestrator |
2026-04-04 00:53:07.809218 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-04 00:53:07.809226 | orchestrator | Saturday 04 April 2026 00:51:10 +0000 (0:00:00.051) 0:01:41.593 ********
2026-04-04 00:53:07.809234 | orchestrator |
2026-04-04 00:53:07.809242 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-04 00:53:07.809250 | orchestrator | Saturday 04 April 2026 00:51:10 +0000 (0:00:00.051) 0:01:41.644 ********
2026-04-04 00:53:07.809258 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.809266 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:07.809274 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:07.809283 | orchestrator |
2026-04-04 00:53:07.809291 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-04 00:53:07.809299 | orchestrator | Saturday 04 April 2026 00:51:18 +0000 (0:00:07.494) 0:01:49.139 ********
2026-04-04 00:53:07.809307 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:07.809315 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.809323 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:07.809331 | orchestrator |
2026-04-04 00:53:07.809340 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-04-04 00:53:07.809348 | orchestrator | Saturday 04 April 2026 00:51:25 +0000 (0:00:07.659) 0:01:56.799 ********
2026-04-04 00:53:07.809356 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-04 00:53:07.809364 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-04 00:53:07.809372 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-04 00:53:07.809380 | orchestrator |
2026-04-04 00:53:07.809393 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-04 00:53:07.809401 | orchestrator | Saturday 04 April 2026 00:51:39 +0000 (0:00:13.193) 0:02:09.992 ********
2026-04-04 00:53:07.809409 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.809417 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:07.809425 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:07.809433 | orchestrator |
2026-04-04 00:53:07.809441 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-04 00:53:07.809449 | orchestrator | Saturday 04 April 2026 00:51:49 +0000 (0:00:10.887) 0:02:20.880 ********
2026-04-04 00:53:07.809457 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:07.809522 | orchestrator |
2026-04-04 00:53:07.809531 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-04 00:53:07.809540 | orchestrator | Saturday 04 April 2026 00:51:50 +0000 (0:00:00.106) 0:02:20.986 ********
2026-04-04 00:53:07.809548 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:07.809556 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:07.809564 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:07.809572 | orchestrator |
2026-04-04 00:53:07.809579 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-04 00:53:07.809588 | orchestrator | Saturday 04 April 2026 00:51:50 +0000 (0:00:00.866) 0:02:21.853 ********
2026-04-04 00:53:07.809595 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.809603 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.809611 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.809619 | orchestrator |
2026-04-04 00:53:07.809627 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-04 00:53:07.809635 | orchestrator | Saturday 04 April 2026 00:51:51 +0000 (0:00:00.615) 0:02:22.468 ********
2026-04-04 00:53:07.809643 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:07.809651 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:07.809659 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:07.809667 | orchestrator |
2026-04-04 00:53:07.809675 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-04 00:53:07.809683 | orchestrator | Saturday 04 April 2026 00:51:52 +0000 (0:00:00.601) 0:02:23.171 ********
2026-04-04 00:53:07.809691 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:07.809699 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:07.809707 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:07.809715 | orchestrator |
2026-04-04 00:53:07.809723 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-04 00:53:07.809730 | orchestrator | Saturday 04 April 2026 00:51:52 +0000 (0:00:00.601) 0:02:23.772 ********
2026-04-04 00:53:07.809738 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:07.809746 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:07.809755 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:07.809762 | orchestrator |
2026-04-04 00:53:07.809770 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-04 00:53:07.809778 | orchestrator | Saturday 04 April 2026 00:51:53 +0000 (0:00:00.956) 0:02:24.728 ********
2026-04-04 00:53:07.809786 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:07.809794 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:07.809802 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:07.809810 | orchestrator |
2026-04-04 00:53:07.809818 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-04-04 00:53:07.809826 | orchestrator | Saturday 04 April 2026 00:51:54 +0000 (0:00:00.813) 0:02:25.542 ********
2026-04-04 00:53:07.809834 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-04-04 00:53:07.809842 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-04-04 00:53:07.809850 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-04-04 00:53:07.809858 | orchestrator |
2026-04-04 00:53:07.809866 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-04 00:53:07.809874 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:00.939) 0:02:26.482 ********
2026-04-04 00:53:07.809882 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:07.809890 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:07.809898 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:07.809906 | orchestrator |
2026-04-04 00:53:07.809914 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-04 00:53:07.809927 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:00.263) 0:02:26.745 ********
2026-04-04 00:53:07.809936 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:53:07.809952 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB':
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.809965 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.809974 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.809983 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.809992 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810000 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810076 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810158 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810176 | orchestrator | 2026-04-04 00:53:07.810185 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-04 00:53:07.810193 | orchestrator | Saturday 04 April 2026 00:51:58 +0000 (0:00:02.693) 0:02:29.438 ******** 2026-04-04 00:53:07.810201 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810210 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810219 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810235 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810272 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 
'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 
'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810333 | orchestrator | 2026-04-04 00:53:07.810342 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-04-04 00:53:07.810350 | orchestrator | Saturday 04 April 2026 00:52:03 +0000 (0:00:05.023) 0:02:34.462 ******** 2026-04-04 00:53:07.810359 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-04 00:53:07.810367 | orchestrator | 2026-04-04 00:53:07.810375 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-04 00:53:07.810383 | orchestrator | Saturday 04 April 2026 00:52:04 +0000 (0:00:00.607) 0:02:35.070 ******** 2026-04-04 00:53:07.810392 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.810400 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.810408 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.810415 | orchestrator | 2026-04-04 00:53:07.810424 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-04 00:53:07.810433 | orchestrator | Saturday 04 April 2026 00:52:04 +0000 (0:00:00.640) 0:02:35.710 ******** 2026-04-04 00:53:07.810441 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.810449 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.810457 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.810465 | orchestrator | 2026-04-04 00:53:07.810473 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-04 00:53:07.810481 | 
orchestrator | Saturday 04 April 2026 00:52:06 +0000 (0:00:01.518) 0:02:37.229 ******** 2026-04-04 00:53:07.810490 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.810498 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.810506 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.810514 | orchestrator | 2026-04-04 00:53:07.810523 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-04 00:53:07.810531 | orchestrator | Saturday 04 April 2026 00:52:07 +0000 (0:00:01.577) 0:02:38.806 ******** 2026-04-04 00:53:07.810542 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810550 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810580 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-04 00:53:07.810603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810622 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810656 | orchestrator | 2026-04-04 00:53:07.810663 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-04 00:53:07.810670 | orchestrator | Saturday 04 April 2026 00:52:12 +0000 (0:00:04.171) 0:02:42.978 ******** 2026-04-04 00:53:07.810677 | orchestrator | ok: [testbed-node-0] => { 2026-04-04 00:53:07.810685 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:53:07.810692 | orchestrator | } 2026-04-04 00:53:07.810699 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 00:53:07.810705 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:53:07.810712 | orchestrator | } 2026-04-04 00:53:07.810719 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 00:53:07.810727 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:53:07.810733 | orchestrator | } 
2026-04-04 00:53:07.810740 | orchestrator | 2026-04-04 00:53:07.810747 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 00:53:07.810754 | orchestrator | Saturday 04 April 2026 00:52:12 +0000 (0:00:00.328) 0:02:43.306 ******** 2026-04-04 00:53:07.810768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810796 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:53:07.810854 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:53:07.810861 | 
orchestrator | 2026-04-04 00:53:07.810868 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-04 00:53:07.810876 | orchestrator | Saturday 04 April 2026 00:52:14 +0000 (0:00:01.929) 0:02:45.236 ******** 2026-04-04 00:53:07.810882 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-04 00:53:07.810889 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-04 00:53:07.810897 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-04 00:53:07.810903 | orchestrator | 2026-04-04 00:53:07.810910 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-04 00:53:07.810921 | orchestrator | Saturday 04 April 2026 00:52:34 +0000 (0:00:20.484) 0:03:05.720 ******** 2026-04-04 00:53:07.810928 | orchestrator | ok: [testbed-node-0] => { 2026-04-04 00:53:07.810934 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:53:07.810941 | orchestrator | } 2026-04-04 00:53:07.810948 | orchestrator | ok: [testbed-node-1] => { 2026-04-04 00:53:07.810960 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:53:07.810968 | orchestrator | } 2026-04-04 00:53:07.810975 | orchestrator | ok: [testbed-node-2] => { 2026-04-04 00:53:07.810982 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:53:07.810989 | orchestrator | } 2026-04-04 00:53:07.810996 | orchestrator | 2026-04-04 00:53:07.811003 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-04 00:53:07.811010 | orchestrator | Saturday 04 April 2026 00:52:35 +0000 (0:00:00.538) 0:03:06.259 ******** 2026-04-04 00:53:07.811016 | orchestrator | 2026-04-04 00:53:07.811023 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-04 00:53:07.811030 | orchestrator | Saturday 04 April 2026 00:52:35 +0000 (0:00:00.048) 0:03:06.307 ******** 2026-04-04 00:53:07.811037 | orchestrator | 2026-04-04 00:53:07.811043 | 
orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-04 00:53:07.811051 | orchestrator | Saturday 04 April 2026 00:52:35 +0000 (0:00:00.047) 0:03:06.354 ******** 2026-04-04 00:53:07.811058 | orchestrator | 2026-04-04 00:53:07.811065 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-04 00:53:07.811072 | orchestrator | Saturday 04 April 2026 00:52:35 +0000 (0:00:00.070) 0:03:06.424 ******** 2026-04-04 00:53:07.811090 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:07.811097 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:07.811104 | orchestrator | 2026-04-04 00:53:07.811111 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-04 00:53:07.811118 | orchestrator | Saturday 04 April 2026 00:52:47 +0000 (0:00:11.925) 0:03:18.350 ******** 2026-04-04 00:53:07.811124 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:07.811131 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:07.811138 | orchestrator | 2026-04-04 00:53:07.811145 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-04 00:53:07.811152 | orchestrator | Saturday 04 April 2026 00:52:59 +0000 (0:00:11.870) 0:03:30.221 ******** 2026-04-04 00:53:07.811159 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:07.811165 | orchestrator | 2026-04-04 00:53:07.811172 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-04 00:53:07.811179 | orchestrator | Saturday 04 April 2026 00:52:59 +0000 (0:00:00.129) 0:03:30.350 ******** 2026-04-04 00:53:07.811186 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.811193 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.811200 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.811206 | orchestrator | 2026-04-04 00:53:07.811214 | orchestrator | TASK 
[ovn-db : Configure OVN NB connection settings] *************************** 2026-04-04 00:53:07.811220 | orchestrator | Saturday 04 April 2026 00:53:00 +0000 (0:00:00.764) 0:03:31.114 ******** 2026-04-04 00:53:07.811227 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.811234 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.811241 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:07.811248 | orchestrator | 2026-04-04 00:53:07.811255 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-04 00:53:07.811262 | orchestrator | Saturday 04 April 2026 00:53:00 +0000 (0:00:00.672) 0:03:31.787 ******** 2026-04-04 00:53:07.811268 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.811275 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.811283 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.811289 | orchestrator | 2026-04-04 00:53:07.811296 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-04 00:53:07.811303 | orchestrator | Saturday 04 April 2026 00:53:01 +0000 (0:00:00.807) 0:03:32.595 ******** 2026-04-04 00:53:07.811310 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:07.811317 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:07.811323 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:07.811330 | orchestrator | 2026-04-04 00:53:07.811337 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-04 00:53:07.811344 | orchestrator | Saturday 04 April 2026 00:53:02 +0000 (0:00:00.758) 0:03:33.354 ******** 2026-04-04 00:53:07.811357 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.811370 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.811377 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.811384 | orchestrator | 2026-04-04 00:53:07.811391 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] 
********************************************* 2026-04-04 00:53:07.811398 | orchestrator | Saturday 04 April 2026 00:53:03 +0000 (0:00:00.984) 0:03:34.339 ******** 2026-04-04 00:53:07.811405 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:07.811411 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:07.811418 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:07.811425 | orchestrator | 2026-04-04 00:53:07.811432 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-04-04 00:53:07.811439 | orchestrator | Saturday 04 April 2026 00:53:04 +0000 (0:00:00.992) 0:03:35.332 ******** 2026-04-04 00:53:07.811446 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-04 00:53:07.811453 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-04 00:53:07.811459 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-04 00:53:07.811466 | orchestrator | 2026-04-04 00:53:07.811473 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:53:07.811480 | orchestrator | testbed-node-0 : ok=64  changed=26  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-04 00:53:07.811488 | orchestrator | testbed-node-1 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-04 00:53:07.811495 | orchestrator | testbed-node-2 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-04 00:53:07.811502 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:53:07.811513 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:53:07.811520 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:53:07.811527 | orchestrator | 2026-04-04 00:53:07.811534 | orchestrator | 2026-04-04 00:53:07.811540 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-04 00:53:07.811548 | orchestrator | Saturday 04 April 2026 00:53:05 +0000 (0:00:01.078) 0:03:36.410 ******** 2026-04-04 00:53:07.811554 | orchestrator | =============================================================================== 2026-04-04 00:53:07.811561 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.27s 2026-04-04 00:53:07.811568 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 20.48s 2026-04-04 00:53:07.811575 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 19.53s 2026-04-04 00:53:07.811582 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 19.48s 2026-04-04 00:53:07.811588 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 19.42s 2026-04-04 00:53:07.811595 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 13.19s 2026-04-04 00:53:07.811602 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 10.89s 2026-04-04 00:53:07.811609 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.24s 2026-04-04 00:53:07.811616 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.96s 2026-04-04 00:53:07.811622 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.02s 2026-04-04 00:53:07.811629 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.51s 2026-04-04 00:53:07.811636 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.17s 2026-04-04 00:53:07.811648 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.42s 2026-04-04 00:53:07.811655 | orchestrator | ovn-db : Ensuring config 
directories exist ------------------------------ 3.01s 2026-04-04 00:53:07.811662 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.69s 2026-04-04 00:53:07.811669 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.26s 2026-04-04 00:53:07.811675 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.25s 2026-04-04 00:53:07.811682 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.93s 2026-04-04 00:53:07.811689 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.92s 2026-04-04 00:53:07.811696 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.76s 2026-04-04 00:53:07.811703 | orchestrator | 2026-04-04 00:53:07 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:53:10.849482 | orchestrator | 2026-04-04 00:53:10 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:53:10.850328 | orchestrator | 2026-04-04 00:53:10 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state STARTED 2026-04-04 00:53:10.850987 | orchestrator | 2026-04-04 00:53:10 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:54:29.879601 | orchestrator | 2026-04-04 00:54:29 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED 2026-04-04 00:54:29.880036 | orchestrator | 2026-04-04 00:54:29 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:54:29.880919 | orchestrator | 2026-04-04 00:54:29 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:54:29.886188 | orchestrator | 2026-04-04 00:54:29 | INFO  | Task 712fe003-f5f9-4782-a066-f5118a557802 is in state SUCCESS 2026-04-04 00:54:29.887845 | orchestrator | 2026-04-04 00:54:29.888070 | orchestrator | 2026-04-04 00:54:29.888091 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:54:29.888105 | orchestrator | 2026-04-04 00:54:29.888117 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:54:29.888129 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:00.461) 0:00:00.461 ******** 2026-04-04 
00:54:29.888142 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.888155 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.888168 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.888290 | orchestrator | 2026-04-04 00:54:29.888305 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:54:29.888316 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:00.412) 0:00:00.874 ******** 2026-04-04 00:54:29.888330 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-04 00:54:29.888342 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-04 00:54:29.888355 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-04 00:54:29.888394 | orchestrator | 2026-04-04 00:54:29.888407 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-04 00:54:29.888420 | orchestrator | 2026-04-04 00:54:29.888446 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-04 00:54:29.888459 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.443) 0:00:01.320 ******** 2026-04-04 00:54:29.888472 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.888485 | orchestrator | 2026-04-04 00:54:29.888499 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-04 00:54:29.888510 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:01.071) 0:00:02.391 ******** 2026-04-04 00:54:29.888524 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.888538 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.888551 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.888563 | orchestrator | 2026-04-04 00:54:29.888576 | orchestrator | TASK [Setting sysctl values] 
*************************************************** 2026-04-04 00:54:29.888593 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:01.675) 0:00:04.066 ******** 2026-04-04 00:54:29.888608 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.888622 | orchestrator | 2026-04-04 00:54:29.888636 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-04 00:54:29.888649 | orchestrator | Saturday 04 April 2026 00:48:25 +0000 (0:00:00.663) 0:00:04.730 ******** 2026-04-04 00:54:29.888663 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.888676 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.888690 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.888703 | orchestrator | 2026-04-04 00:54:29.888716 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-04 00:54:29.888730 | orchestrator | Saturday 04 April 2026 00:48:26 +0000 (0:00:00.874) 0:00:05.604 ******** 2026-04-04 00:54:29.888744 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-04 00:54:29.888758 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-04 00:54:29.888771 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-04 00:54:29.888832 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-04 00:54:29.888851 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-04 00:54:29.889064 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-04 00:54:29.889083 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-04 00:54:29.889096 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-04 00:54:29.889109 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-04 00:54:29.889122 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-04 00:54:29.889135 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-04 00:54:29.889151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-04 00:54:29.889168 | orchestrator | 2026-04-04 00:54:29.889180 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-04 00:54:29.889193 | orchestrator | Saturday 04 April 2026 00:48:30 +0000 (0:00:04.357) 0:00:09.961 ******** 2026-04-04 00:54:29.889205 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-04 00:54:29.889216 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-04 00:54:29.889228 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-04 00:54:29.889241 | orchestrator | 2026-04-04 00:54:29.889254 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-04 00:54:29.889270 | orchestrator | Saturday 04 April 2026 00:48:31 +0000 (0:00:00.764) 0:00:10.726 ******** 2026-04-04 00:54:29.889287 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-04 00:54:29.889301 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-04 00:54:29.889313 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-04 00:54:29.889325 | orchestrator | 2026-04-04 00:54:29.889338 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-04 00:54:29.889350 | orchestrator | Saturday 04 April 2026 00:48:33 +0000 (0:00:01.508) 0:00:12.235 
******** 2026-04-04 00:54:29.889363 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-04 00:54:29.889376 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.889409 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-04 00:54:29.889423 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.889436 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-04 00:54:29.889448 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.889623 | orchestrator | 2026-04-04 00:54:29.889638 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-04 00:54:29.889650 | orchestrator | Saturday 04 April 2026 00:48:33 +0000 (0:00:00.575) 0:00:12.810 ******** 2026-04-04 00:54:29.889675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.889693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.889718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.889729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.889740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.889762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.889775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:54:29.889793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:54:29.889805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:54:29.889824 | orchestrator | 2026-04-04 00:54:29.889834 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-04 00:54:29.889844 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:02.148) 0:00:14.959 ******** 2026-04-04 00:54:29.889855 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.889866 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.889876 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.889887 | orchestrator | 2026-04-04 00:54:29.889898 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-04 00:54:29.889908 | orchestrator | Saturday 04 April 2026 00:48:38 +0000 (0:00:02.267) 0:00:17.226 ******** 2026-04-04 00:54:29.889918 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-04 00:54:29.889949 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-04 00:54:29.889961 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-04 00:54:29.889971 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-04 00:54:29.889981 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-04 00:54:29.889993 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-04 00:54:29.890004 | orchestrator | 2026-04-04 00:54:29.890065 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-04 00:54:29.890080 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:02.293) 0:00:19.520 ******** 
2026-04-04 00:54:29.890092 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.890104 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.890112 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.890119 | orchestrator | 2026-04-04 00:54:29.890196 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-04 00:54:29.890204 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:01.162) 0:00:20.683 ******** 2026-04-04 00:54:29.890211 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.890217 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.890224 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.890231 | orchestrator | 2026-04-04 00:54:29.890237 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-04 00:54:29.890244 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:01.277) 0:00:21.960 ******** 2026-04-04 00:54:29.890255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:54:29.890279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:54:29.890292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:54:29.890361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:54:29.890377 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.890390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:54:29.890403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:54:29.890415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:54:29.890427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:54:29.890440 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.890461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:54:29.890483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:54:29.890496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:54:29.890509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:54:29.890521 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.890533 | orchestrator | 2026-04-04 00:54:29.890545 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-04 00:54:29.890553 | orchestrator | Saturday 04 April 2026 00:48:43 +0000 (0:00:01.025) 0:00:22.986 ******** 2026-04-04 00:54:29.890560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:54:29.890717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:54:29.890726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:54:29.890763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:54:29.890795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890811 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:54:29.890819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09', '__omit_place_holder__26c0c8e39bd7ddcf4cdd0ae2788f71a078535d09'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:54:29.890826 | orchestrator | 2026-04-04 00:54:29.890833 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-04 00:54:29.890840 | orchestrator | Saturday 04 April 2026 00:48:48 +0000 (0:00:04.984) 0:00:27.971 ******** 2026-04-04 00:54:29.890847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:54:29.890902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:54:29.891274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:54:29.891296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:54:29.891308 | orchestrator | 2026-04-04 00:54:29.891320 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-04 00:54:29.891331 | orchestrator | Saturday 04 April 2026 00:48:51 +0000 (0:00:03.122) 0:00:31.093 ******** 2026-04-04 00:54:29.891353 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-04 00:54:29.891365 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-04 00:54:29.891376 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-04 00:54:29.891412 | orchestrator | 2026-04-04 
00:54:29.891425 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-04 00:54:29.891436 | orchestrator | Saturday 04 April 2026 00:48:53 +0000 (0:00:01.522) 0:00:32.616 ******** 2026-04-04 00:54:29.891447 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-04 00:54:29.891457 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-04 00:54:29.891469 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-04 00:54:29.891480 | orchestrator | 2026-04-04 00:54:29.891510 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-04 00:54:29.891522 | orchestrator | Saturday 04 April 2026 00:48:58 +0000 (0:00:04.905) 0:00:37.521 ******** 2026-04-04 00:54:29.891532 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.891543 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.891553 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.891564 | orchestrator | 2026-04-04 00:54:29.891575 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-04 00:54:29.891586 | orchestrator | Saturday 04 April 2026 00:48:59 +0000 (0:00:00.764) 0:00:38.286 ******** 2026-04-04 00:54:29.891616 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-04 00:54:29.891629 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-04 00:54:29.891641 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-04 00:54:29.891653 | orchestrator | 2026-04-04 
00:54:29.891818 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-04 00:54:29.891836 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:01.854) 0:00:40.140 ******** 2026-04-04 00:54:29.891848 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-04 00:54:29.891860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-04 00:54:29.891872 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-04 00:54:29.891882 | orchestrator | 2026-04-04 00:54:29.891889 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-04 00:54:29.891896 | orchestrator | Saturday 04 April 2026 00:49:02 +0000 (0:00:01.679) 0:00:41.820 ******** 2026-04-04 00:54:29.891903 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.891910 | orchestrator | 2026-04-04 00:54:29.891916 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-04 00:54:29.891943 | orchestrator | Saturday 04 April 2026 00:49:03 +0000 (0:00:00.491) 0:00:42.311 ******** 2026-04-04 00:54:29.891956 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-04 00:54:29.891964 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-04 00:54:29.891970 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-04 00:54:29.891978 | orchestrator | 2026-04-04 00:54:29.891989 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-04 00:54:29.892000 | orchestrator | Saturday 04 April 2026 00:49:04 +0000 (0:00:01.463) 0:00:43.775 ******** 2026-04-04 00:54:29.892021 | 
orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-04 00:54:29.892032 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-04 00:54:29.892043 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-04 00:54:29.892054 | orchestrator | 2026-04-04 00:54:29.892064 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-04 00:54:29.892075 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:01.370) 0:00:45.145 ******** 2026-04-04 00:54:29.892087 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.892098 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.892110 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.892122 | orchestrator | 2026-04-04 00:54:29.892129 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-04 00:54:29.892136 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:00.310) 0:00:45.456 ******** 2026-04-04 00:54:29.892142 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.892149 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.892156 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.892162 | orchestrator | 2026-04-04 00:54:29.892169 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-04 00:54:29.892176 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:00.246) 0:00:45.703 ******** 2026-04-04 00:54:29.892184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892316 | orchestrator |
2026-04-04 00:54:29.892323 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-04 00:54:29.892330 | orchestrator | Saturday 04 April 2026 00:49:09 +0000 (0:00:03.009) 0:00:48.713 ********
2026-04-04 00:54:29.892340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892374 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.892381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892466 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.892478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892507 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.892514 | orchestrator |
2026-04-04 00:54:29.892521 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-04 00:54:29.892528 | orchestrator | Saturday 04 April 2026 00:49:10 +0000 (0:00:00.632) 0:00:49.345 ********
2026-04-04 00:54:29.892535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892587 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.892594 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.892601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.892608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.892615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.892622 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.892629 | orchestrator |
2026-04-04 00:54:29.892636 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-04 00:54:29.892643 | orchestrator | Saturday 04 April 2026 00:49:11 +0000 (0:00:00.952) 0:00:50.298 ********
2026-04-04 00:54:29.892649 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-04 00:54:29.892656 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-04 00:54:29.892663 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-04 00:54:29.892670 | orchestrator |
2026-04-04 00:54:29.892676 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-04 00:54:29.892683 | orchestrator | Saturday 04 April 2026 00:49:13 +0000 (0:00:01.935) 0:00:52.233 ********
2026-04-04 00:54:29.892690 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-04 00:54:29.892700 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-04 00:54:29.892707 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-04 00:54:29.892719 | orchestrator |
2026-04-04 00:54:29.892726 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-04 00:54:29.892732 | orchestrator | Saturday 04 April 2026 00:49:14 +0000 (0:00:01.750) 0:00:53.984 ********
2026-04-04 00:54:29.892739 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-04 00:54:29.892745 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-04 00:54:29.892791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-04 00:54:29.892804 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-04 00:54:29.892815 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-04 00:54:29.892836 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.892847 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.892858 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-04 00:54:29.892869 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.892881 | orchestrator |
2026-04-04 00:54:29.892892 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-04 00:54:29.892904 | orchestrator | Saturday 04 April 2026 00:49:15 +0000 (0:00:00.864) 0:00:54.849 ********
2026-04-04 00:54:29.892998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.893011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.893022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.893032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.893074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.893091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.893103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.893113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.893124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.893135 | orchestrator |
2026-04-04 00:54:29.893147 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-04 00:54:29.893159 | orchestrator | Saturday 04 April 2026 00:49:18 +0000 (0:00:03.142) 0:00:57.991 ********
2026-04-04 00:54:29.893171 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 00:54:29.893217 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:54:29.893301 | orchestrator | }
2026-04-04 00:54:29.893356 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 00:54:29.893370 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:54:29.893381 | orchestrator | }
2026-04-04 00:54:29.893393 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 00:54:29.893406 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 00:54:29.893417 | orchestrator | }
2026-04-04 00:54:29.893428 | orchestrator |
2026-04-04 00:54:29.893441 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-04 00:54:29.893529 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:00.695) 0:00:58.686 ********
2026-04-04 00:54:29.893555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/',
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.893578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.893591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.893603 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.893623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.893636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.893648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.893660 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.893674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.893693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.893712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.893724 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.893736 | orchestrator |
2026-04-04 00:54:29.893748 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-04 00:54:29.893759 | orchestrator | Saturday 04 April 2026 00:49:20 +0000 (0:00:00.614) 0:00:59.508 ********
2026-04-04 00:54:29.893828 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:54:29.893843 | orchestrator |
2026-04-04 00:54:29.893854 | orchestrator
| TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-04 00:54:29.893866 | orchestrator | Saturday 04 April 2026 00:49:20 +0000 (0:00:00.614) 0:01:00.123 ********
2026-04-04 00:54:29.893921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.894089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-04 00:54:29.894108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.894130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-04 00:54:29.894221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-04 00:54:29.894242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-04 00:54:29.894254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.894265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-04 00:54:29.894284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-04 00:54:29.894296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-04 00:54:29.894328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894342 | orchestrator | 2026-04-04 00:54:29.894349 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-04 00:54:29.894358 | orchestrator | Saturday 04 April 2026 00:49:24 +0000 (0:00:03.157) 0:01:03.281 ******** 2026-04-04 00:54:29.894369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.894381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.894428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894452 | 
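The healthcheck entries logged above ('healthcheck_curl http://…:8042', 'healthcheck_port aodh-listener 5672') are kolla's container health probes. A minimal sketch of what a port probe verifies, assuming a plain TCP connect analogous to kolla's healthcheck_port helper (the implementation here is illustrative, not kolla's actual script):

```python
import socket

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout,
    roughly the condition kolla's healthcheck_port script checks."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the RabbitMQ port (5672) that the aodh-listener check targets.
# Whether it is open depends on the host, so no fixed result is asserted here.
ok = healthcheck_port("127.0.0.1", 5672, timeout=1.0)
print(type(ok).__name__)
```

The '30' values in the logged healthcheck dicts map to Docker's interval/timeout settings (in seconds), and 'retries': '3' is how many consecutive failures mark the container unhealthy.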
orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.894470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.894485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.894496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.894513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.894535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894562 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.894576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.894587 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.894598 | orchestrator | 2026-04-04 00:54:29.894609 | 
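Each 'haproxy' sub-dict in the items above (e.g. aodh_api with mode, port, listen_port, backend_http_extra) is what the haproxy-config role templates into a frontend/backend pair. A simplified sketch of that mapping, assuming a bare 'listen' section (the real role renders far more: TLS, external VIP binds, defaults):

```python
def render_listen_block(name: str, cfg: dict, backends: list) -> str:
    """Render a minimal HAProxy 'listen' section from a kolla-style
    haproxy service entry like the aodh_api dict in the log above."""
    lines = [f"listen {name}",
             f"    mode {cfg['mode']}",
             f"    bind *:{cfg['listen_port']}"]
    # backend_http_extra lines (e.g. 'option httpchk') are emitted verbatim
    lines += [f"    {extra}" for extra in cfg.get("backend_http_extra", [])]
    lines += [f"    server {host} {addr}:{cfg['port']} check"
              for host, addr in backends]
    return "\n".join(lines)

aodh_api = {"enabled": "yes", "mode": "http", "external": False,
            "port": "8042", "listen_port": "8042",
            "backend_http_extra": ["option httpchk"]}
print(render_listen_block("aodh_api", aodh_api,
                          [("testbed-node-0", "192.168.16.10")]))
```

The '_external' variants in the log differ only by 'external': True and an 'external_fqdn', which the role binds on the external VIP instead of the internal one.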
orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-04 00:54:29.894620 | orchestrator | Saturday 04 April 2026 00:49:24 +0000 (0:00:00.654) 0:01:03.936 ******** 2026-04-04 00:54:29.894661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.894671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.894686 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.894692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.894734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.894741 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.894785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.894792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-04 
00:54:29.894799 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.894805 | orchestrator | 2026-04-04 00:54:29.894811 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-04 00:54:29.894818 | orchestrator | Saturday 04 April 2026 00:49:26 +0000 (0:00:01.525) 0:01:05.461 ******** 2026-04-04 00:54:29.894824 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.894830 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.894836 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.894871 | orchestrator | 2026-04-04 00:54:29.894906 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-04 00:54:29.894913 | orchestrator | Saturday 04 April 2026 00:49:27 +0000 (0:00:01.314) 0:01:06.776 ******** 2026-04-04 00:54:29.894919 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.894943 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.894950 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.894956 | orchestrator | 2026-04-04 00:54:29.894962 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-04 00:54:29.894969 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:01.918) 0:01:08.694 ******** 2026-04-04 00:54:29.894975 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.894981 | orchestrator | 2026-04-04 00:54:29.894987 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-04 00:54:29.894993 | orchestrator | Saturday 04 April 2026 00:49:30 +0000 (0:00:00.752) 0:01:09.447 ******** 2026-04-04 00:54:29.895005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.895018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.895044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895062 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.895076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895096 | orchestrator | 2026-04-04 00:54:29.895102 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-04 00:54:29.895108 | orchestrator | Saturday 04 April 2026 00:49:35 +0000 (0:00:04.797) 0:01:14.245 ******** 2026-04-04 00:54:29.895115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.895127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895144 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.895153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.895160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895173 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.895183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.895194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.895222 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.895232 | orchestrator | 2026-04-04 00:54:29.895243 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-04 00:54:29.895253 | orchestrator | Saturday 04 April 2026 00:49:35 +0000 (0:00:00.618) 0:01:14.864 ******** 2026-04-04 00:54:29.895263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.895275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.895312 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.895324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}})  2026-04-04 00:54:29.895334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.895345 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.895401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.895413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.895423 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.895433 | orchestrator | 2026-04-04 00:54:29.895443 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-04 00:54:29.895454 | orchestrator | Saturday 04 April 2026 00:49:36 +0000 (0:00:00.818) 0:01:15.683 ******** 2026-04-04 00:54:29.895464 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.895485 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.895496 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.895506 | orchestrator | 2026-04-04 00:54:29.895545 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-04 00:54:29.895557 | orchestrator | Saturday 04 April 2026 00:49:37 +0000 (0:00:01.361) 0:01:17.045 ******** 2026-04-04 00:54:29.895568 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.895579 | orchestrator | changed: [testbed-node-1] 
2026-04-04 00:54:29.895589 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.895599 | orchestrator | 2026-04-04 00:54:29.895610 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-04 00:54:29.895620 | orchestrator | Saturday 04 April 2026 00:49:39 +0000 (0:00:01.944) 0:01:18.989 ******** 2026-04-04 00:54:29.895630 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.895641 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.895652 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.895663 | orchestrator | 2026-04-04 00:54:29.895681 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-04 00:54:29.895693 | orchestrator | Saturday 04 April 2026 00:49:40 +0000 (0:00:00.466) 0:01:19.455 ******** 2026-04-04 00:54:29.895704 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.895715 | orchestrator | 2026-04-04 00:54:29.895726 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-04 00:54:29.895738 | orchestrator | Saturday 04 April 2026 00:49:41 +0000 (0:00:00.681) 0:01:20.137 ******** 2026-04-04 00:54:29.895750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-04 00:54:29.895763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-04 00:54:29.895775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-04 00:54:29.895785 | orchestrator | 2026-04-04 00:54:29.895823 | orchestrator | TASK [haproxy-config : Add configuration for 
ceph-rgw when using single external frontend] *** 2026-04-04 00:54:29.895878 | orchestrator | Saturday 04 April 2026 00:49:44 +0000 (0:00:03.487) 0:01:23.624 ******** 2026-04-04 00:54:29.895893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-04 00:54:29.895904 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.895998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-04 
00:54:29.896016 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.896033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-04 00:54:29.896045 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.896056 | orchestrator | 2026-04-04 00:54:29.896066 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-04 00:54:29.896077 | orchestrator | Saturday 04 April 2026 00:49:47 +0000 (0:00:03.069) 0:01:26.693 ******** 2026-04-04 00:54:29.896089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:54:29.896101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:54:29.896121 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.896132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:54:29.896143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:54:29.896152 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.896162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:54:29.896179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:54:29.896190 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.896200 | orchestrator | 2026-04-04 00:54:29.896209 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-04 00:54:29.896219 | orchestrator | Saturday 04 April 2026 00:49:50 +0000 (0:00:02.678) 0:01:29.372 ******** 2026-04-04 00:54:29.896229 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.896239 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.896248 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.896258 | orchestrator | 2026-04-04 00:54:29.896268 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-04 00:54:29.896277 | orchestrator | Saturday 04 April 2026 00:49:50 +0000 (0:00:00.385) 0:01:29.758 ******** 2026-04-04 00:54:29.896287 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.896297 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.896311 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.896322 | orchestrator | 2026-04-04 00:54:29.896332 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-04 00:54:29.896342 | orchestrator | Saturday 04 April 2026 00:49:51 +0000 (0:00:00.992) 0:01:30.750 ******** 2026-04-04 00:54:29.896351 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.896361 | orchestrator | 2026-04-04 00:54:29.896371 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-04 00:54:29.896381 | orchestrator | Saturday 04 April 2026 00:49:52 +0000 (0:00:00.740) 0:01:31.490 ******** 2026-04-04 00:54:29.896391 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.896412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.896455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.896514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896555 | orchestrator | 2026-04-04 00:54:29.896564 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-04 00:54:29.896573 | orchestrator | Saturday 04 April 2026 00:49:55 +0000 (0:00:03.267) 0:01:34.758 ******** 2026-04-04 00:54:29.896584 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.896600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.896616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896661 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.896670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896695 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.896710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.896728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.896759 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.896769 | orchestrator | 2026-04-04 00:54:29.896778 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-04 00:54:29.896788 | 
orchestrator | Saturday 04 April 2026 00:49:56 +0000 (0:00:00.768) 0:01:35.526 ******** 2026-04-04 00:54:29.896798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.896816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.896827 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.896838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.896848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.896866 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.896880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.896891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-04 00:54:29.896901 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.896911 | orchestrator | 2026-04-04 00:54:29.896921 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-04 00:54:29.896945 | orchestrator | Saturday 04 April 2026 00:49:57 +0000 (0:00:00.974) 0:01:36.501 ******** 2026-04-04 00:54:29.896956 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.896966 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.896976 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.896985 | orchestrator | 2026-04-04 00:54:29.896994 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-04 00:54:29.897004 | orchestrator | Saturday 04 April 2026 00:49:58 +0000 (0:00:01.215) 0:01:37.717 ******** 2026-04-04 00:54:29.897014 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.897023 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.897032 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.897041 | orchestrator | 2026-04-04 00:54:29.897052 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-04 00:54:29.897061 | orchestrator | Saturday 04 April 2026 00:50:00 +0000 (0:00:01.835) 0:01:39.553 ******** 2026-04-04 00:54:29.897070 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.897079 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.897088 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.897097 | orchestrator | 2026-04-04 00:54:29.897105 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-04 00:54:29.897115 | orchestrator | Saturday 04 April 2026 00:50:00 +0000 (0:00:00.298) 0:01:39.851 ******** 2026-04-04 00:54:29.897124 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.897133 | orchestrator | skipping: [testbed-node-1] 
2026-04-04 00:54:29.897142 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.897151 | orchestrator | 2026-04-04 00:54:29.897160 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-04 00:54:29.897170 | orchestrator | Saturday 04 April 2026 00:50:01 +0000 (0:00:00.407) 0:01:40.258 ******** 2026-04-04 00:54:29.897180 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.897189 | orchestrator | 2026-04-04 00:54:29.897199 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-04 00:54:29.897209 | orchestrator | Saturday 04 April 2026 00:50:01 +0000 (0:00:00.744) 0:01:41.003 ******** 2026-04-04 00:54:29.897219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.897247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:54:29.897264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.897343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:54:29.897354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.897373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:54:29.897430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897513 | orchestrator | 2026-04-04 00:54:29.897522 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-04 00:54:29.897531 | orchestrator | Saturday 04 April 2026 00:50:05 +0000 (0:00:03.260) 0:01:44.264 ******** 2026-04-04 00:54:29.897545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.897555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:54:29.897564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897628 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.897638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.897648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:54:29.897663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.897673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.898120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:54:29.898139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898151 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.898163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.898195 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.898201 | orchestrator | 2026-04-04 00:54:29.898207 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-04 00:54:29.898213 | orchestrator | Saturday 04 April 2026 00:50:05 +0000 (0:00:00.863) 0:01:45.127 ******** 2026-04-04 00:54:29.898219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.898227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.898233 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.898239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.898246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.898251 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.898257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.898267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.898273 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.898316 | orchestrator | 2026-04-04 00:54:29.898324 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-04 00:54:29.898329 | orchestrator | Saturday 04 April 2026 00:50:06 +0000 (0:00:00.865) 0:01:45.993 ******** 2026-04-04 00:54:29.898335 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.898340 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.898359 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.898365 | orchestrator | 2026-04-04 00:54:29.898370 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-04 00:54:29.898376 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:01.198) 0:01:47.192 ******** 2026-04-04 00:54:29.898388 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.898394 | orchestrator | 
changed: [testbed-node-1] 2026-04-04 00:54:29.898400 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.898406 | orchestrator | 2026-04-04 00:54:29.898419 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-04 00:54:29.898425 | orchestrator | Saturday 04 April 2026 00:50:09 +0000 (0:00:01.893) 0:01:49.086 ******** 2026-04-04 00:54:29.898457 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.898464 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.898470 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.898476 | orchestrator | 2026-04-04 00:54:29.898481 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-04 00:54:29.898487 | orchestrator | Saturday 04 April 2026 00:50:10 +0000 (0:00:00.244) 0:01:49.331 ******** 2026-04-04 00:54:29.898493 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.898499 | orchestrator | 2026-04-04 00:54:29.898504 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-04 00:54:29.898510 | orchestrator | Saturday 04 April 2026 00:50:11 +0000 (0:00:00.908) 0:01:50.239 ******** 2026-04-04 00:54:29.898532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 00:54:29.898547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.898555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 00:54:29.898569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 00:54:29.898581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.898594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.898600 | orchestrator | 2026-04-04 00:54:29.898606 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-04 00:54:29.898612 | orchestrator | Saturday 04 April 2026 00:50:15 +0000 (0:00:04.017) 0:01:54.256 ******** 2026-04-04 00:54:29.898620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 00:54:29.898630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.898636 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.898649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 00:54:29.898659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.898667 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.898680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 00:54:29.898695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.898705 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.898714 | orchestrator | 2026-04-04 00:54:29.898733 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-04 00:54:29.898744 | orchestrator | Saturday 04 April 2026 00:50:18 +0000 (0:00:02.987) 0:01:57.244 ******** 2026-04-04 00:54:29.898754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-04 00:54:29.898769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-04 00:54:29.898779 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.898794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-04 00:54:29.898810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-04 00:54:29.898820 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.898827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-04 00:54:29.898834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-04 00:54:29.898841 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.898856 | orchestrator | 2026-04-04 00:54:29.898863 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-04 00:54:29.898869 | orchestrator | Saturday 04 April 2026 00:50:21 +0000 (0:00:03.395) 0:02:00.640 ******** 2026-04-04 00:54:29.898881 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.898888 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.898903 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.898914 | orchestrator | 2026-04-04 00:54:29.898921 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-04 00:54:29.898942 | orchestrator | Saturday 04 April 2026 00:50:22 +0000 (0:00:01.035) 0:02:01.675 ******** 2026-04-04 00:54:29.898949 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.898980 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.898987 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.898993 | orchestrator | 2026-04-04 00:54:29.898999 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-04 00:54:29.899017 | orchestrator | Saturday 04 April 2026 00:50:24 +0000 (0:00:01.598) 0:02:03.273 ******** 2026-04-04 00:54:29.899024 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.899030 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.899037 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.899043 | orchestrator | 2026-04-04 00:54:29.899049 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-04 00:54:29.899056 | orchestrator | Saturday 04 April 2026 00:50:24 +0000 (0:00:00.244) 0:02:03.518 ******** 2026-04-04 
00:54:29.899062 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.899068 | orchestrator | 2026-04-04 00:54:29.899074 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-04 00:54:29.899080 | orchestrator | Saturday 04 April 2026 00:50:25 +0000 (0:00:00.870) 0:02:04.388 ******** 2026-04-04 00:54:29.899112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.899122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.899128 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.899134 | orchestrator | 2026-04-04 00:54:29.899139 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-04 00:54:29.899145 | orchestrator | Saturday 04 April 2026 00:50:27 +0000 (0:00:02.715) 0:02:07.104 ******** 2026-04-04 00:54:29.899150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.899156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.899165 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.899171 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.899180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.899186 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.899191 | orchestrator | 2026-04-04 00:54:29.899196 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-04 00:54:29.899202 | orchestrator | Saturday 04 April 2026 00:50:28 +0000 (0:00:00.363) 0:02:07.468 ******** 2026-04-04 00:54:29.899207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.899215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.899222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.899227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.899233 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.899238 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.899244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.899249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.899255 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.899260 | orchestrator | 2026-04-04 00:54:29.899266 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-04 00:54:29.899271 | 
orchestrator | Saturday 04 April 2026 00:50:29 +0000 (0:00:00.869) 0:02:08.337 ********
2026-04-04 00:54:29.899276 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.899282 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.899288 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.899297 | orchestrator |
2026-04-04 00:54:29.899306 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-04 00:54:29.899315 | orchestrator | Saturday 04 April 2026 00:50:30 +0000 (0:00:01.454) 0:02:09.791 ********
2026-04-04 00:54:29.899324 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.899332 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.899340 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.899349 | orchestrator |
2026-04-04 00:54:29.899364 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-04 00:54:29.899374 | orchestrator | Saturday 04 April 2026 00:50:32 +0000 (0:00:00.379) 0:02:11.928 ********
2026-04-04 00:54:29.899380 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.899386 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.899391 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.899396 | orchestrator |
2026-04-04 00:54:29.899401 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-04 00:54:29.899408 | orchestrator | Saturday 04 April 2026 00:50:33 +0000 (0:00:00.379) 0:02:12.307 ********
2026-04-04 00:54:29.899417 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:54:29.899426 | orchestrator |
2026-04-04 00:54:29.899435 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-04 00:54:29.899444 | orchestrator | Saturday 04 April 2026 00:50:34 +0000 (0:00:01.473) 0:02:13.781 ********
2026-04-04
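The grafana item above defines its listeners as an internal/external pair (`grafana_server` and `grafana_server_external`), and the `enabled` flag appears both as the boolean `True` and as the string `'yes'`. A minimal sketch of partitioning such a `haproxy` dict, assuming the hypothetical helper name `split_frontends`:

```python
def split_frontends(haproxy_cfg):
    """Sketch: partition a service's haproxy entries into internal and
    external listeners, mirroring the *_external naming in the log above."""
    internal, external = {}, {}
    for name, entry in haproxy_cfg.items():
        # 'enabled' shows up both as bool True and as the string 'yes'.
        if entry.get("enabled") not in (True, "yes"):
            continue
        (external if entry.get("external") else internal)[name] = entry
    return internal, external

# Entry shape taken from the grafana item in the log above.
grafana_haproxy = {
    "grafana_server": {"enabled": "yes", "mode": "http", "external": False,
                       "port": "3000", "listen_port": "3000"},
    "grafana_server_external": {"enabled": True, "mode": "http", "external": True,
                                "external_fqdn": "api.testbed.osism.xyz",
                                "port": "3000", "listen_port": "3000"},
}
internal, external = split_frontends(grafana_haproxy)
```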
00:54:29.899460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:54:29.899494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:54:29.899524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:54:29.899532 | orchestrator | 2026-04-04 00:54:29.899537 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-04 00:54:29.899543 | orchestrator | Saturday 04 April 2026 00:50:38 +0000 (0:00:03.893) 0:02:17.674 ******** 2026-04-04 00:54:29.899552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:54:29.899562 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.899571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:54:29.899581 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.899590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:54:29.899596 | 
orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.899602 | orchestrator | 2026-04-04 00:54:29.899607 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-04 00:54:29.899612 | orchestrator | Saturday 04 April 2026 00:50:39 +0000 (0:00:00.666) 0:02:18.341 ******** 2026-04-04 00:54:29.899620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-04 00:54:29.899626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:54:29.899633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-04 00:54:29.899639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:54:29.899648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}})  2026-04-04 00:54:29.899654 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.899660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-04 00:54:29.899666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:54:29.899671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-04 00:54:29.899677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:54:29.899683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 
'tls_backend': 'no'}})  2026-04-04 00:54:29.899691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-04 00:54:29.899697 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.899702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:54:29.899710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-04 00:54:29.899716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:54:29.899722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-04 00:54:29.899727 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.899736 | orchestrator | 2026-04-04 00:54:29.899741 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-04 00:54:29.899749 | orchestrator | Saturday 04 April 2026 00:50:40 +0000 (0:00:01.765) 0:02:20.106 ******** 2026-04-04 00:54:29.899758 | orchestrator | 
changed: [testbed-node-1]
2026-04-04 00:54:29.899767 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.899776 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.899784 | orchestrator |
2026-04-04 00:54:29.899793 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-04 00:54:29.899802 | orchestrator | Saturday 04 April 2026 00:50:42 +0000 (0:00:01.238) 0:02:21.344 ********
2026-04-04 00:54:29.899812 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.899820 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.899829 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.899839 | orchestrator |
2026-04-04 00:54:29.899847 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-04 00:54:29.899856 | orchestrator | Saturday 04 April 2026 00:50:44 +0000 (0:00:02.521) 0:02:23.866 ********
2026-04-04 00:54:29.899866 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.899875 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.899884 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.899892 | orchestrator |
2026-04-04 00:54:29.899898 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-04 00:54:29.899903 | orchestrator | Saturday 04 April 2026 00:50:45 +0000 (0:00:00.589) 0:02:24.455 ********
2026-04-04 00:54:29.899908 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.899914 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.899919 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.899964 | orchestrator |
2026-04-04 00:54:29.899970 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-04 00:54:29.899975 | orchestrator | Saturday 04 April 2026 00:50:45 +0000 (0:00:00.280) 0:02:24.736 ********
2026-04-04 00:54:29.899981 | orchestrator | included:
keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.899986 | orchestrator | 2026-04-04 00:54:29.899991 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-04 00:54:29.899997 | orchestrator | Saturday 04 April 2026 00:50:46 +0000 (0:00:01.290) 0:02:26.026 ******** 2026-04-04 00:54:29.900003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 00:54:29.901366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:54:29.901407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:54:29.901417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 00:54:29.901427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:54:29.901436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:54:29.901452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 00:54:29.901469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:54:29.901478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:54:29.901486 | orchestrator | 2026-04-04 00:54:29.901495 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-04 00:54:29.901504 | orchestrator | Saturday 04 April 2026 00:50:50 +0000 (0:00:03.829) 0:02:29.855 ******** 2026-04-04 00:54:29.901513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 00:54:29.901522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:54:29.901535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 00:54:29.901551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:54:29.901558 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.901567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:54:29.901576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:54:29.901584 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.901592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 00:54:29.901600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:54:29.901615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:54:29.901624 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.901632 | orchestrator | 2026-04-04 00:54:29.901639 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-04 00:54:29.901647 | orchestrator | Saturday 04 April 2026 00:50:51 +0000 (0:00:00.754) 0:02:30.609 ******** 2026-04-04 00:54:29.901658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-04 00:54:29.901668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-04 00:54:29.901677 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.901687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-04 00:54:29.901696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-04 00:54:29.901704 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.901712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-04 00:54:29.901719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-04 00:54:29.901727 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.901734 | orchestrator | 2026-04-04 00:54:29.901742 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-04 00:54:29.901750 | orchestrator | Saturday 04 April 2026 00:50:52 +0000 (0:00:01.119) 0:02:31.729 ******** 2026-04-04 00:54:29.901757 | orchestrator | changed: 
[testbed-node-0] 2026-04-04 00:54:29.901764 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.901771 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.901779 | orchestrator | 2026-04-04 00:54:29.901787 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-04 00:54:29.901794 | orchestrator | Saturday 04 April 2026 00:50:53 +0000 (0:00:01.248) 0:02:32.977 ******** 2026-04-04 00:54:29.901801 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.901809 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.901816 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.901824 | orchestrator | 2026-04-04 00:54:29.901833 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-04 00:54:29.901849 | orchestrator | Saturday 04 April 2026 00:50:56 +0000 (0:00:02.220) 0:02:35.197 ******** 2026-04-04 00:54:29.901856 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.901864 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.901871 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.901879 | orchestrator | 2026-04-04 00:54:29.901886 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-04 00:54:29.901894 | orchestrator | Saturday 04 April 2026 00:50:56 +0000 (0:00:00.312) 0:02:35.510 ******** 2026-04-04 00:54:29.901901 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.901908 | orchestrator | 2026-04-04 00:54:29.901916 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-04 00:54:29.901940 | orchestrator | Saturday 04 April 2026 00:50:57 +0000 (0:00:01.292) 0:02:36.803 ******** 2026-04-04 00:54:29.901956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.901970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.901980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.901988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.902049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902059 | orchestrator | 2026-04-04 00:54:29.902071 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-04 00:54:29.902078 | orchestrator | Saturday 04 April 2026 00:51:02 +0000 (0:00:04.781) 0:02:41.585 ******** 2026-04-04 00:54:29.902087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.902095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902107 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.902115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.902128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902137 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.902148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.902157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902165 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.902180 | orchestrator | 2026-04-04 00:54:29.902189 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-04 00:54:29.902197 | orchestrator | Saturday 04 April 2026 00:51:03 +0000 (0:00:00.866) 0:02:42.452 ******** 2026-04-04 00:54:29.902205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902215 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902224 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.902233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902250 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.902259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902276 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.902284 | orchestrator | 2026-04-04 00:54:29.902303 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-04 00:54:29.902313 | orchestrator | Saturday 04 April 2026 00:51:04 +0000 (0:00:01.314) 0:02:43.767 ******** 2026-04-04 00:54:29.902322 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.902330 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.902338 | 
orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.902347 | orchestrator | 2026-04-04 00:54:29.902355 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-04 00:54:29.902364 | orchestrator | Saturday 04 April 2026 00:51:05 +0000 (0:00:01.335) 0:02:45.102 ******** 2026-04-04 00:54:29.902372 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.902381 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.902389 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.902398 | orchestrator | 2026-04-04 00:54:29.902406 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-04 00:54:29.902415 | orchestrator | Saturday 04 April 2026 00:51:07 +0000 (0:00:01.905) 0:02:47.007 ******** 2026-04-04 00:54:29.902423 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.902431 | orchestrator | 2026-04-04 00:54:29.902440 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-04 00:54:29.902452 | orchestrator | Saturday 04 April 2026 00:51:09 +0000 (0:00:01.218) 0:02:48.225 ******** 2026-04-04 00:54:29.902461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.902478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.902523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 
'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.902564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902604 | orchestrator | 2026-04-04 00:54:29.902613 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-04 00:54:29.902621 | orchestrator | Saturday 04 April 2026 00:51:12 +0000 (0:00:02.974) 0:02:51.200 ******** 2026-04-04 00:54:29.902631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.902639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.902700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902716 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.902725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 
'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.902753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902761 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.902775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.902812 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.902820 | orchestrator | 2026-04-04 00:54:29.902829 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-04 00:54:29.902838 | orchestrator | Saturday 04 April 2026 00:51:12 +0000 (0:00:00.706) 0:02:51.907 ******** 2026-04-04 00:54:29.902847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902886 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.902894 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.902903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.902921 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.902945 | orchestrator | 2026-04-04 00:54:29.902954 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-04 00:54:29.902962 | orchestrator | Saturday 04 April 2026 00:51:13 +0000 (0:00:01.113) 0:02:53.020 ******** 2026-04-04 00:54:29.902971 | orchestrator | changed: 
[testbed-node-0] 2026-04-04 00:54:29.902979 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.902988 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.902996 | orchestrator | 2026-04-04 00:54:29.903005 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-04 00:54:29.903013 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:01.241) 0:02:54.262 ******** 2026-04-04 00:54:29.903021 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.903030 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.903038 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.903052 | orchestrator | 2026-04-04 00:54:29.903061 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-04 00:54:29.903074 | orchestrator | Saturday 04 April 2026 00:51:16 +0000 (0:00:01.728) 0:02:55.990 ******** 2026-04-04 00:54:29.903084 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.903092 | orchestrator | 2026-04-04 00:54:29.903100 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-04 00:54:29.903109 | orchestrator | Saturday 04 April 2026 00:51:17 +0000 (0:00:00.791) 0:02:56.781 ******** 2026-04-04 00:54:29.903117 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-04 00:54:29.903125 | orchestrator | 2026-04-04 00:54:29.903134 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-04 00:54:29.903142 | orchestrator | Saturday 04 April 2026 00:51:20 +0000 (0:00:02.848) 0:02:59.630 ******** 2026-04-04 00:54:29.903156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:29.903166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:54:29.903174 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:29.903207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:54:29.903216 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:29.903235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:54:29.903251 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903260 | orchestrator | 2026-04-04 00:54:29.903268 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-04 00:54:29.903276 | orchestrator | Saturday 04 April 2026 00:51:23 +0000 (0:00:03.031) 0:03:02.661 ******** 2026-04-04 00:54:29.903294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:29.903303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:54:29.903311 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:29.903339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:54:29.903348 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:29.903369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:54:29.903382 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903390 | orchestrator | 2026-04-04 00:54:29.903398 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-04 00:54:29.903406 | orchestrator | Saturday 04 April 2026 00:51:25 +0000 (0:00:01.658) 0:03:04.319 ******** 2026-04-04 00:54:29.903414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:54:29.903428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:54:29.903437 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:54:29.903455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:54:29.903463 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:54:29.903481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:54:29.903494 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903503 | orchestrator | 2026-04-04 00:54:29.903511 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-04 00:54:29.903519 | orchestrator | Saturday 04 April 2026 00:51:28 +0000 (0:00:02.966) 0:03:07.286 ******** 
2026-04-04 00:54:29.903527 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.903535 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.903543 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.903551 | orchestrator | 2026-04-04 00:54:29.903559 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-04 00:54:29.903567 | orchestrator | Saturday 04 April 2026 00:51:30 +0000 (0:00:02.305) 0:03:09.591 ******** 2026-04-04 00:54:29.903575 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903582 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903590 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903598 | orchestrator | 2026-04-04 00:54:29.903606 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-04 00:54:29.903614 | orchestrator | Saturday 04 April 2026 00:51:31 +0000 (0:00:01.316) 0:03:10.908 ******** 2026-04-04 00:54:29.903623 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903631 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903639 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903647 | orchestrator | 2026-04-04 00:54:29.903655 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-04 00:54:29.903663 | orchestrator | Saturday 04 April 2026 00:51:32 +0000 (0:00:00.364) 0:03:11.273 ******** 2026-04-04 00:54:29.903671 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.903679 | orchestrator | 2026-04-04 00:54:29.903687 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-04 00:54:29.903699 | orchestrator | Saturday 04 April 2026 00:51:32 +0000 (0:00:00.822) 0:03:12.095 ******** 2026-04-04 00:54:29.903708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-04 00:54:29.903720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-04 00:54:29.903729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-04 00:54:29.903741 | orchestrator | 2026-04-04 00:54:29.903750 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-04 00:54:29.903758 | orchestrator | Saturday 04 April 2026 00:51:34 +0000 (0:00:01.618) 0:03:13.714 ******** 2026-04-04 00:54:29.903766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-04 00:54:29.903774 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-04 00:54:29.903789 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-04 00:54:29.903809 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903817 | orchestrator | 2026-04-04 00:54:29.903824 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-04 00:54:29.903835 | orchestrator | Saturday 04 April 2026 00:51:34 +0000 (0:00:00.322) 0:03:14.036 ******** 2026-04-04 00:54:29.903842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-04 00:54:29.903851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'active_passive': True}})  2026-04-04 00:54:29.903864 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903872 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-04 00:54:29.903887 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903896 | orchestrator | 2026-04-04 00:54:29.903904 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-04 00:54:29.903911 | orchestrator | Saturday 04 April 2026 00:51:35 +0000 (0:00:00.550) 0:03:14.587 ******** 2026-04-04 00:54:29.903919 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.903967 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.903976 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.903984 | orchestrator | 2026-04-04 00:54:29.903991 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-04 00:54:29.903999 | orchestrator | Saturday 04 April 2026 00:51:36 +0000 (0:00:00.724) 0:03:15.312 ******** 2026-04-04 00:54:29.904007 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.904014 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.904022 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.904030 | orchestrator | 2026-04-04 00:54:29.904038 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-04 00:54:29.904045 | orchestrator | Saturday 04 April 2026 00:51:37 +0000 (0:00:00.920) 0:03:16.232 ******** 2026-04-04 00:54:29.904053 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.904062 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 00:54:29.904070 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.904078 | orchestrator | 2026-04-04 00:54:29.904086 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-04 00:54:29.904095 | orchestrator | Saturday 04 April 2026 00:51:37 +0000 (0:00:00.406) 0:03:16.638 ******** 2026-04-04 00:54:29.904103 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.904110 | orchestrator | 2026-04-04 00:54:29.904116 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-04 00:54:29.904123 | orchestrator | Saturday 04 April 2026 00:51:38 +0000 (0:00:01.045) 0:03:17.684 ******** 2026-04-04 00:54:29.904132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.904150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-04 00:54:29.904183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-04 00:54:29.904193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-04 00:54:29.904246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.904255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-04 00:54:29.904274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.904312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.904332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.904354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-04 00:54:29.904369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 
5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-04 00:54:29.904440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-04 00:54:29.904666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.904692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904702 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-04 00:54:29.904711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.904790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.904812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.904823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-04 00:54:29.904840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-04 00:54:29.904903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.904953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-04 00:54:29.904962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.904970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.904979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-04 00:54:29.905042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.905075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.905082 | orchestrator | 2026-04-04 00:54:29.905090 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-04 00:54:29.905097 | orchestrator | Saturday 04 April 2026 00:51:45 +0000 (0:00:06.589) 0:03:24.273 ******** 2026-04-04 00:54:29.905105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.905173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 
'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-04 00:54:29.905195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-04 00:54:29.905200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-04 00:54:29.905286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.905291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-04 00:54:29.905301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.905386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.905395 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.905404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.905412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-04 00:54:29.905487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-04 00:54:29.905494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-04 00:54:29.905586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.905602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.905611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-04 00:54:29.905627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.905715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 
'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-04 00:54:29.905723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.905747 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.905756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-04 00:54:29.905808 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-04 00:54:29.905847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.905860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-04 00:54:29.905920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-04 00:54:29.905948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.905957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:54:29.905983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:54:29.905992 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.906001 | orchestrator | 2026-04-04 00:54:29.906009 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-04 00:54:29.906045 | orchestrator | Saturday 04 April 2026 00:51:47 +0000 (0:00:02.663) 0:03:26.936 ******** 2026-04-04 00:54:29.906053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.906063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.906072 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.906080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.906088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.906095 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.906152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.906163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.906181 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.906189 | orchestrator | 2026-04-04 00:54:29.906197 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-04 00:54:29.906204 | orchestrator | Saturday 04 April 2026 00:51:49 +0000 (0:00:01.290) 0:03:28.227 ******** 2026-04-04 
00:54:29.906212 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.906220 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.906228 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.906236 | orchestrator | 2026-04-04 00:54:29.906244 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-04 00:54:29.906252 | orchestrator | Saturday 04 April 2026 00:51:50 +0000 (0:00:01.313) 0:03:29.541 ******** 2026-04-04 00:54:29.906259 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.906271 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.906279 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.906287 | orchestrator | 2026-04-04 00:54:29.906294 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-04 00:54:29.906302 | orchestrator | Saturday 04 April 2026 00:51:52 +0000 (0:00:01.699) 0:03:31.241 ******** 2026-04-04 00:54:29.906310 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.906326 | orchestrator | 2026-04-04 00:54:29.906335 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-04 00:54:29.906342 | orchestrator | Saturday 04 April 2026 00:51:53 +0000 (0:00:01.291) 0:03:32.532 ******** 2026-04-04 00:54:29.906351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 00:54:29.906360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 00:54:29.906419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 00:54:29.906431 | orchestrator | 2026-04-04 00:54:29.906439 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-04 00:54:29.906446 | orchestrator | Saturday 04 April 2026 00:51:56 +0000 (0:00:03.021) 0:03:35.554 ******** 2026-04-04 00:54:29.906458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-04 
00:54:29.906475 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.906483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-04 00:54:29.906491 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.906499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-04 00:54:29.906507 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.906515 | orchestrator | 2026-04-04 00:54:29.906523 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-04 00:54:29.906530 | orchestrator | Saturday 04 April 2026 00:51:57 +0000 (0:00:00.895) 0:03:36.449 ******** 2026-04-04 00:54:29.906580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.906591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.906599 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.906606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.906623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.906631 | orchestrator | skipping: [testbed-node-1] 
2026-04-04 00:54:29.906639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.906647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.906655 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.906662 | orchestrator | 2026-04-04 00:54:29.906670 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-04 00:54:29.906678 | orchestrator | Saturday 04 April 2026 00:51:57 +0000 (0:00:00.675) 0:03:37.125 ******** 2026-04-04 00:54:29.906697 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.906702 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.906707 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.906711 | orchestrator | 2026-04-04 00:54:29.906716 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-04 00:54:29.906720 | orchestrator | Saturday 04 April 2026 00:51:59 +0000 (0:00:01.167) 0:03:38.292 ******** 2026-04-04 00:54:29.906725 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.906729 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.906734 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.906738 | orchestrator | 2026-04-04 00:54:29.906743 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-04 00:54:29.906747 | orchestrator | Saturday 04 April 2026 00:52:01 +0000 (0:00:01.936) 0:03:40.228 ******** 2026-04-04 00:54:29.906752 | orchestrator | 
included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.906756 | orchestrator | 2026-04-04 00:54:29.906761 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-04 00:54:29.906765 | orchestrator | Saturday 04 April 2026 00:52:02 +0000 (0:00:01.277) 0:03:41.505 ******** 2026-04-04 00:54:29.906770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.906826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.906852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.906862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.906871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.906880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.906972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.906990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.906999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.907008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907061 | orchestrator | 2026-04-04 00:54:29.907066 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-04 00:54:29.907070 | orchestrator | Saturday 04 April 2026 00:52:07 +0000 (0:00:05.094) 0:03:46.600 ******** 2026-04-04 00:54:29.907079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.907084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.907089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907102 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.907132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.907141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.907146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907156 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.907161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.907187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.907193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.907203 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.907209 | orchestrator | 2026-04-04 00:54:29.907217 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-04 00:54:29.907225 | orchestrator | Saturday 04 April 2026 00:52:08 +0000 (0:00:00.653) 0:03:47.254 ******** 2026-04-04 00:54:29.907235 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907263 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.907268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907312 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.907319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.907352 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.907359 | orchestrator | 2026-04-04 00:54:29.907366 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-04 00:54:29.907374 | orchestrator | Saturday 04 April 2026 00:52:09 +0000 (0:00:01.696) 0:03:48.950 ******** 2026-04-04 00:54:29.907381 
| orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.907389 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.907397 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.907404 | orchestrator | 2026-04-04 00:54:29.907413 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-04 00:54:29.907418 | orchestrator | Saturday 04 April 2026 00:52:10 +0000 (0:00:01.127) 0:03:50.078 ******** 2026-04-04 00:54:29.907422 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.907427 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.907431 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.907436 | orchestrator | 2026-04-04 00:54:29.907441 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-04 00:54:29.907452 | orchestrator | Saturday 04 April 2026 00:52:13 +0000 (0:00:02.109) 0:03:52.187 ******** 2026-04-04 00:54:29.907456 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.907461 | orchestrator | 2026-04-04 00:54:29.907465 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-04 00:54:29.907470 | orchestrator | Saturday 04 April 2026 00:52:14 +0000 (0:00:01.527) 0:03:53.715 ******** 2026-04-04 00:54:29.907474 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-novncproxy) 2026-04-04 00:54:29.907479 | orchestrator | 2026-04-04 00:54:29.907484 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-04 00:54:29.907489 | orchestrator | Saturday 04 April 2026 00:52:15 +0000 (0:00:01.135) 0:03:54.851 ******** 2026-04-04 00:54:29.907494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': 
True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-04 00:54:29.907501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-04 00:54:29.907531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-04 00:54:29.907536 | orchestrator | 2026-04-04 00:54:29.907541 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-04 00:54:29.907545 | orchestrator | Saturday 04 April 2026 00:52:20 +0000 (0:00:04.776) 0:03:59.627 ******** 2026-04-04 00:54:29.907552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907561 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.907565 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.907571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907583 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.907590 | orchestrator | 2026-04-04 00:54:29.907598 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-04 00:54:29.907605 | orchestrator | Saturday 04 April 2026 00:52:22 +0000 (0:00:01.565) 0:04:01.192 ******** 2026-04-04 00:54:29.907612 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:54:29.907618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:54:29.907624 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.907629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:54:29.907634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:54:29.907639 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.907644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:54:29.907649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:54:29.907654 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.907659 | orchestrator | 2026-04-04 00:54:29.907663 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 
2026-04-04 00:54:29.907668 | orchestrator | Saturday 04 April 2026 00:52:23 +0000 (0:00:01.853) 0:04:03.046 ******** 2026-04-04 00:54:29.907673 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.907678 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.907683 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.907688 | orchestrator | 2026-04-04 00:54:29.907708 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-04 00:54:29.907714 | orchestrator | Saturday 04 April 2026 00:52:26 +0000 (0:00:02.731) 0:04:05.778 ******** 2026-04-04 00:54:29.907719 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:29.907723 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:29.907728 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:29.907733 | orchestrator | 2026-04-04 00:54:29.907738 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-04 00:54:29.907742 | orchestrator | Saturday 04 April 2026 00:52:29 +0000 (0:00:02.963) 0:04:08.741 ******** 2026-04-04 00:54:29.907748 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-04 00:54:29.907753 | orchestrator | 2026-04-04 00:54:29.907757 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-04 00:54:29.907769 | orchestrator | Saturday 04 April 2026 00:52:30 +0000 (0:00:00.895) 0:04:09.637 ******** 2026-04-04 00:54:29.907776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907782 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.907787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907792 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.907797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907802 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.907806 | orchestrator | 2026-04-04 00:54:29.907811 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-04 00:54:29.907817 | orchestrator | Saturday 04 April 2026 00:52:31 +0000 (0:00:01.176) 0:04:10.813 ******** 2026-04-04 00:54:29.907821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': 
False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907826 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.907831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907836 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.907853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:54:29.907862 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.907867 | orchestrator | 2026-04-04 00:54:29.907872 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-04 00:54:29.907877 | orchestrator | Saturday 04 April 2026 00:52:32 +0000 
(0:00:01.185) 0:04:11.999 ******** 2026-04-04 00:54:29.907883 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.907891 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.907898 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.907905 | orchestrator | 2026-04-04 00:54:29.907912 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-04 00:54:29.907919 | orchestrator | Saturday 04 April 2026 00:52:34 +0000 (0:00:01.396) 0:04:13.395 ******** 2026-04-04 00:54:29.907939 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.907946 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.907952 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.907957 | orchestrator | 2026-04-04 00:54:29.907967 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-04 00:54:29.907974 | orchestrator | Saturday 04 April 2026 00:52:36 +0000 (0:00:01.916) 0:04:15.312 ******** 2026-04-04 00:54:29.907981 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.907988 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.907995 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.908002 | orchestrator | 2026-04-04 00:54:29.908009 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-04 00:54:29.908016 | orchestrator | Saturday 04 April 2026 00:52:38 +0000 (0:00:02.547) 0:04:17.859 ******** 2026-04-04 00:54:29.908023 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-04 00:54:29.908031 | orchestrator | 2026-04-04 00:54:29.908038 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-04 00:54:29.908045 | orchestrator | Saturday 04 April 2026 00:52:40 +0000 (0:00:01.515) 0:04:19.375 ******** 2026-04-04 
00:54:29.908062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:54:29.908071 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.908076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:54:29.908080 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.908084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:54:29.908089 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.908093 | orchestrator | 2026-04-04 00:54:29.908097 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-04 00:54:29.908105 | orchestrator | Saturday 04 April 2026 00:52:41 +0000 (0:00:01.322) 0:04:20.697 ******** 2026-04-04 00:54:29.908110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:54:29.908115 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.908139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:54:29.908144 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.908151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:54:29.908156 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.908160 | orchestrator | 2026-04-04 00:54:29.908164 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-04 00:54:29.908168 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:01.110) 0:04:21.808 ******** 2026-04-04 00:54:29.908172 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.908176 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.908180 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.908184 | orchestrator | 2026-04-04 00:54:29.908189 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-04 00:54:29.908193 | orchestrator | Saturday 04 April 2026 00:52:44 +0000 (0:00:01.452) 0:04:23.261 ******** 2026-04-04 00:54:29.908197 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.908201 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.908205 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.908209 | orchestrator | 2026-04-04 00:54:29.908213 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-04 00:54:29.908217 | orchestrator | Saturday 04 April 2026 00:52:46 +0000 (0:00:02.104) 0:04:25.365 ******** 2026-04-04 00:54:29.908221 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:29.908225 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:29.908229 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:29.908233 | orchestrator | 2026-04-04 00:54:29.908237 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-04 00:54:29.908242 | orchestrator | Saturday 04 April 2026 00:52:49 +0000 (0:00:03.295) 0:04:28.661 ******** 2026-04-04 00:54:29.908246 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-04 00:54:29.908253 | orchestrator | 2026-04-04 00:54:29.908260 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-04 00:54:29.908267 | orchestrator | Saturday 04 April 2026 00:52:51 +0000 (0:00:01.763) 0:04:30.425 ******** 2026-04-04 00:54:29.908274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 00:54:29.908307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 00:54:29.908316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:54:29.908327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:54:29.908332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.908337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.908344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.908349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.908367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.908372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.908379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 00:54:29.908387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:54:29.908400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.908408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.908435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.908443 | orchestrator | 2026-04-04 00:54:29.908449 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-04 00:54:29.908456 | orchestrator | Saturday 04 April 2026 00:52:55 +0000 (0:00:03.858) 0:04:34.284 ******** 2026-04-04 00:54:29.908466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 00:54:29.908474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:54:29.908481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:54:29.908496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 
'timeout': '30'}}})  2026-04-04 00:54:29.908504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:54:29.908511 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.908540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 00:54:29.908552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 00:54:29.908557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 00:54:29.908565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 00:54:29.908570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 00:54:29.908574 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.908578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 00:54:29.908595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 00:54:29.908601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 00:54:29.908641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 00:54:29.908656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 00:54:29.908660 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.908665 | orchestrator |
2026-04-04 00:54:29.908669 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-04 00:54:29.908674 | orchestrator | Saturday 04 April 2026 00:52:55 +0000 (0:00:00.615) 0:04:34.900 ********
2026-04-04 00:54:29.908678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:54:29.908683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:54:29.908687 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.908692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:54:29.908696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:54:29.908700 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.908704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:54:29.908709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:54:29.908713 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.908717 | orchestrator |
2026-04-04 00:54:29.908721 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-04 00:54:29.908725 | orchestrator | Saturday 04 April 2026 00:52:56 +0000 (0:00:00.826) 0:04:35.726 ********
2026-04-04 00:54:29.908729 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.908734 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.908738 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.908742 | orchestrator |
2026-04-04 00:54:29.908746 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-04 00:54:29.908770 | orchestrator | Saturday 04 April 2026 00:52:57 +0000 (0:00:01.235) 0:04:36.962 ********
2026-04-04 00:54:29.908778 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.908785 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.908791 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.908799 | orchestrator |
2026-04-04 00:54:29.908806 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-04 00:54:29.908813 | orchestrator | Saturday 04 April 2026 00:52:59 +0000 (0:00:01.964) 0:04:38.927 ********
2026-04-04 00:54:29.908820 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:54:29.908826 | orchestrator |
2026-04-04 00:54:29.908830 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-04 00:54:29.908838 | orchestrator | Saturday 04 April 2026 00:53:01 +0000 (0:00:01.387) 0:04:40.314 ********
2026-04-04 00:54:29.908845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.908851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.908858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.908886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-04 00:54:29.908904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-04 00:54:29.908913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-04 00:54:29.908920 | orchestrator |
2026-04-04 00:54:29.908939 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-04 00:54:29.908945 | orchestrator | Saturday 04 April 2026 00:53:06 +0000 (0:00:05.638) 0:04:45.952 ********
2026-04-04 00:54:29.908953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.908981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-04 00:54:29.908996 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.909006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.909014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-04 00:54:29.909021 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.909025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.909044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-04 00:54:29.909060 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.909068 | orchestrator |
2026-04-04 00:54:29.909075 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-04 00:54:29.909083 | orchestrator | Saturday 04 April 2026 00:53:07 +0000 (0:00:00.967) 0:04:46.920 ********
2026-04-04 00:54:29.909090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-04 00:54:29.909098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-04 00:54:29.909106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-04 00:54:29.909114 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.909121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-04 00:54:29.909129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-04 00:54:29.909133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-04 00:54:29.909138 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.909145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-04 00:54:29.909153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-04 00:54:29.909160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-04 00:54:29.909171 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.909179 | orchestrator |
2026-04-04 00:54:29.909186 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-04 00:54:29.909193 | orchestrator | Saturday 04 April 2026 00:53:08 +0000 (0:00:00.883) 0:04:47.804 ********
2026-04-04 00:54:29.909199 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.909206 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.909213 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.909220 | orchestrator |
2026-04-04 00:54:29.909227 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-04 00:54:29.909254 | orchestrator | Saturday 04 April 2026 00:53:09 +0000 (0:00:00.439) 0:04:48.243 ********
2026-04-04 00:54:29.909262 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.909269 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.909276 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.909283 | orchestrator |
2026-04-04 00:54:29.909290 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-04 00:54:29.909297 | orchestrator | Saturday 04 April 2026 00:53:10 +0000 (0:00:01.376) 0:04:49.620 ********
2026-04-04 00:54:29.909304 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:54:29.909310 | orchestrator |
2026-04-04 00:54:29.909317 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-04 00:54:29.909324 | orchestrator | Saturday 04 April 2026 00:53:12 +0000 (0:00:01.689) 0:04:51.309 ********
2026-04-04 00:54:29.909343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-04 00:54:29.909351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 00:54:29.909359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:54:29.909367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-04 00:54:29.909397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:54:29.909406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:54:29.909417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 00:54:29.909424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:54:29.909432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:54:29.909439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:54:29.909453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-04 00:54:29.909478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 00:54:29.909487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:54:29.909497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:54:29.909505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:54:29.909513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.909525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-04 00:54:29.909548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:54:29.909559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 00:54:29.909567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'],
'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-04 00:54:29.909586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 00:54:29.909594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 00:54:29.909636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:54:29.909643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-04 00:54:29.909655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-04 00:54:29.909663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-04 00:54:29.909687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-04-04 00:54:29.909695 | orchestrator | 
2026-04-04 00:54:29.909702 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 
2026-04-04 00:54:29.909709 | orchestrator | Saturday 04 April 2026 00:53:16 +0000 (0:00:04.468) 0:04:55.777 ******** 
2026-04-04 00:54:29.909719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-04 00:54:29.909727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 00:54:29.909735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-04 00:54:29.909747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 00:54:29.909790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 00:54:29.909798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.909817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-04 00:54:29.909825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 00:54:29.909853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.909871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 00:54:29.909879 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.909886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  
2026-04-04 00:54:29.909898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 00:54:29.909960 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.909968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-04 00:54:29.909980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 00:54:29.909988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.909996 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.910008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 00:54:29.910047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:54:29.910061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-04 00:54:29.910068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:54:29.910076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-04 00:54:29.910081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-04-04 00:54:29.910085 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 00:54:29.910089 | orchestrator | 
2026-04-04 00:54:29.910094 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 
2026-04-04 00:54:29.910101 | orchestrator | Saturday 04 April 2026 00:53:18 +0000 (0:00:01.522) 0:04:57.300 ******** 
2026-04-04 00:54:29.910105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}) 
2026-04-04 00:54:29.910110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}) 
2026-04-04 00:54:29.910118 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.910126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.910130 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.910134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-04 00:54:29.910139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-04 00:54:29.910143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.910147 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.910151 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-04 00:54:29.910160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-04 00:54:29.910164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-04 00:54:29.910172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option 
httpchk']}})  2026-04-04 00:54:29.910177 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.910181 | orchestrator | 2026-04-04 00:54:29.910185 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-04 00:54:29.910193 | orchestrator | Saturday 04 April 2026 00:53:19 +0000 (0:00:00.985) 0:04:58.286 ******** 2026-04-04 00:54:29.910197 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.910201 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910205 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.910210 | orchestrator | 2026-04-04 00:54:29.910214 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-04 00:54:29.910218 | orchestrator | Saturday 04 April 2026 00:53:19 +0000 (0:00:00.423) 0:04:58.709 ******** 2026-04-04 00:54:29.910224 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.910228 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910232 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.910237 | orchestrator | 2026-04-04 00:54:29.910240 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-04 00:54:29.910244 | orchestrator | Saturday 04 April 2026 00:53:20 +0000 (0:00:01.352) 0:05:00.062 ******** 2026-04-04 00:54:29.910248 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.910252 | orchestrator | 2026-04-04 00:54:29.910256 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-04 00:54:29.910259 | orchestrator | Saturday 04 April 2026 00:53:22 +0000 (0:00:01.660) 0:05:01.723 ******** 2026-04-04 00:54:29.910263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:54:29.910268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:54:29.910274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:54:29.910281 | orchestrator | 2026-04-04 00:54:29.910285 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-04 00:54:29.910289 | orchestrator | Saturday 04 April 2026 00:53:25 +0000 (0:00:02.597) 0:05:04.320 ******** 2026-04-04 00:54:29.910295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:54:29.910299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:54:29.910303 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.910307 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:54:29.910315 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.910319 | orchestrator | 2026-04-04 00:54:29.910322 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-04 00:54:29.910326 | orchestrator | Saturday 04 April 2026 00:53:25 +0000 (0:00:00.380) 0:05:04.701 ******** 2026-04-04 00:54:29.910330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-04 00:54:29.910338 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.910342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-04 00:54:29.910345 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-04 00:54:29.910355 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.910359 | orchestrator | 2026-04-04 00:54:29.910363 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-04 00:54:29.910366 | orchestrator | Saturday 04 April 2026 00:53:26 +0000 (0:00:00.899) 0:05:05.600 ******** 2026-04-04 00:54:29.910370 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.910374 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910378 | orchestrator | skipping: [testbed-node-2] 2026-04-04 
00:54:29.910381 | orchestrator | 2026-04-04 00:54:29.910385 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-04 00:54:29.910389 | orchestrator | Saturday 04 April 2026 00:53:26 +0000 (0:00:00.452) 0:05:06.053 ******** 2026-04-04 00:54:29.910393 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:29.910397 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910400 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.910404 | orchestrator | 2026-04-04 00:54:29.910408 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-04 00:54:29.910412 | orchestrator | Saturday 04 April 2026 00:53:28 +0000 (0:00:01.342) 0:05:07.395 ******** 2026-04-04 00:54:29.910417 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:29.910421 | orchestrator | 2026-04-04 00:54:29.910425 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-04 00:54:29.910429 | orchestrator | Saturday 04 April 2026 00:53:29 +0000 (0:00:01.667) 0:05:09.063 ******** 2026-04-04 00:54:29.910433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-04 00:54:29.910437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-04 00:54:29.910446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-04 00:54:29.910453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 00:54:29.910457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 00:54:29.910461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 00:54:29.910468 | orchestrator | 2026-04-04 00:54:29.910472 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-04 00:54:29.910476 | orchestrator | Saturday 04 April 2026 00:53:35 +0000 (0:00:05.318) 0:05:14.381 ******** 2026-04-04 00:54:29.910482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-04 00:54:29.910488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-04 00:54:29.910492 | orchestrator | 
skipping: [testbed-node-0] 2026-04-04 00:54:29.910496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-04 00:54:29.910501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-04 00:54:29.910507 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-04 00:54:29.910521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-04 00:54:29.910525 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:29.910529 | orchestrator | 2026-04-04 00:54:29.910533 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-04 00:54:29.910537 | orchestrator | Saturday 04 April 2026 00:53:35 +0000 (0:00:00.547) 0:05:14.928 ******** 2026-04-04 00:54:29.910540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-04 00:54:29.910544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-04 00:54:29.910549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.910555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.910559 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 00:54:29.910563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-04 00:54:29.910567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-04 00:54:29.910571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.910575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-04 00:54:29.910579 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:29.910582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-04 00:54:29.910588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-04 00:54:29.910592 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-04 00:54:29.910596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-04 00:54:29.910600 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.910604 | orchestrator |
2026-04-04 00:54:29.910610 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-04 00:54:29.910614 | orchestrator | Saturday 04 April 2026 00:53:36 +0000 (0:00:00.810) 0:05:15.738 ********
2026-04-04 00:54:29.910617 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.910621 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.910625 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.910628 | orchestrator |
2026-04-04 00:54:29.910632 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-04 00:54:29.910636 | orchestrator | Saturday 04 April 2026 00:53:38 +0000 (0:00:01.399) 0:05:17.138 ********
2026-04-04 00:54:29.910640 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.910644 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.910647 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.910651 | orchestrator |
2026-04-04 00:54:29.910655 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-04 00:54:29.910659 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:02.066) 0:05:19.204 ********
2026-04-04 00:54:29.910665 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.910669 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.910672 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.910676 | orchestrator |
2026-04-04 00:54:29.910680 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-04 00:54:29.910684 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:00.314) 0:05:19.519 ********
2026-04-04 00:54:29.910687 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.910691 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.910695 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.910699 | orchestrator |
2026-04-04 00:54:29.910702 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-04 00:54:29.910706 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:00.308) 0:05:19.827 ********
2026-04-04 00:54:29.910710 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.910714 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.910717 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.910721 | orchestrator |
2026-04-04 00:54:29.910725 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-04 00:54:29.910729 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:00.275) 0:05:20.102 ********
2026-04-04 00:54:29.910732 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.910736 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.910740 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.910743 | orchestrator |
2026-04-04 00:54:29.910747 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-04 00:54:29.910751 | orchestrator | Saturday 04 April 2026 00:53:41 +0000 (0:00:00.556) 0:05:20.659 ********
2026-04-04 00:54:29.910755 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.910758 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.910762 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.910766 | orchestrator |
2026-04-04 00:54:29.910770 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-04 00:54:29.910773 | orchestrator | Saturday 04 April 2026 00:53:41 +0000 (0:00:00.301) 0:05:20.960 ********
2026-04-04 00:54:29.910777 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:54:29.910781 | orchestrator |
2026-04-04 00:54:29.910785 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-04 00:54:29.910789 | orchestrator | Saturday 04 April 2026 00:53:43 +0000 (0:00:01.768) 0:05:22.729 ********
2026-04-04 00:54:29.910793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.910799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.910805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.910812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.910816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.910820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.910824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.910828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.910833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.910840 | orchestrator |
2026-04-04 00:54:29.910844 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-04 00:54:29.910848 | orchestrator | Saturday 04 April 2026 00:53:46 +0000 (0:00:02.839) 0:05:25.568 ********
2026-04-04 00:54:29.910851 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 00:54:29.910855 | orchestrator |     "msg": "Notifying handlers"
2026-04-04 00:54:29.910859 | orchestrator | }
2026-04-04 00:54:29.910863 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 00:54:29.910867 | orchestrator |     "msg": "Notifying handlers"
2026-04-04 00:54:29.910871 | orchestrator | }
2026-04-04 00:54:29.910874 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 00:54:29.910878 | orchestrator |     "msg": "Notifying handlers"
2026-04-04 00:54:29.910882 | orchestrator | }
2026-04-04 00:54:29.910886 | orchestrator |
2026-04-04 00:54:29.910891 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-04 00:54:29.910895 | orchestrator | Saturday 04 April 2026 00:53:46 +0000 (0:00:00.320) 0:05:25.889 ********
2026-04-04 00:54:29.910899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.910903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.910907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.910911 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.910915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.910919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.910941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.910948 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.910957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-04 00:54:29.910963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-04 00:54:29.910967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-04 00:54:29.910971 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.910975 | orchestrator |
2026-04-04 00:54:29.910979 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-04-04 00:54:29.910983 | orchestrator | Saturday 04 April 2026 00:53:47 +0000 (0:00:01.242) 0:05:27.132 ********
2026-04-04 00:54:29.910987 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.910991 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.910994 | orchestrator | ok: [testbed-node-2]
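The container items echoed by the "Check containers" task above all share one shape: a `value` dict with `container_name`, `group`, `enabled`, `image`, `volumes`, and an optional `healthcheck` whose numeric fields kolla renders as strings. As an illustration only, a small sketch of validating such an item before acting on it; the helper name `validate_container_item` and the exact rules are assumptions, not part of kolla-ansible:

```python
# Illustrative only: sanity-check a kolla-style container item like the ones
# logged above. Helper name and rules are hypothetical, not kolla-ansible API.

REQUIRED_KEYS = {"container_name", "group", "enabled", "image", "volumes"}

def validate_container_item(item: dict) -> list:
    """Return a list of problems found in one {'key': ..., 'value': ...} item."""
    problems = []
    value = item.get("value", {})
    missing = REQUIRED_KEYS - value.keys()
    if missing:
        problems.append("missing keys: %s" % sorted(missing))
    hc = value.get("healthcheck")
    if hc is not None:
        # In the log, interval/retries/timeout appear as numeric strings.
        for field in ("interval", "retries", "timeout"):
            if not str(hc.get(field, "")).isdigit():
                problems.append("healthcheck.%s is not numeric" % field)
        if not isinstance(hc.get("test"), list):
            problems.append("healthcheck.test must be a list")
    return problems

item = {
    "key": "haproxy",
    "value": {
        "container_name": "haproxy",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/haproxy:2025.1",
        "volumes": ["/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro"],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
            "timeout": "30",
        },
    },
}

print(validate_container_item(item))  # an empty list means the item looks sane
```

Such a check would catch, for example, a healthcheck whose `test` was flattened to a string instead of the `['CMD-SHELL', ...]` list shown in the log.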
2026-04-04 00:54:29.910998 | orchestrator |
2026-04-04 00:54:29.911002 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-04-04 00:54:29.911006 | orchestrator | Saturday 04 April 2026 00:53:48 +0000 (0:00:00.634) 0:05:27.767 ********
2026-04-04 00:54:29.911009 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911013 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911017 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911021 | orchestrator |
2026-04-04 00:54:29.911024 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-04-04 00:54:29.911028 | orchestrator | Saturday 04 April 2026 00:53:48 +0000 (0:00:00.294) 0:05:28.061 ********
2026-04-04 00:54:29.911032 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911035 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911042 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911046 | orchestrator |
2026-04-04 00:54:29.911050 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-04-04 00:54:29.911053 | orchestrator | Saturday 04 April 2026 00:53:49 +0000 (0:00:01.069) 0:05:29.131 ********
2026-04-04 00:54:29.911057 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911061 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911065 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911068 | orchestrator |
2026-04-04 00:54:29.911072 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-04-04 00:54:29.911076 | orchestrator | Saturday 04 April 2026 00:53:50 +0000 (0:00:00.767) 0:05:29.961 ********
2026-04-04 00:54:29.911080 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911083 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911087 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911091 | orchestrator |
2026-04-04 00:54:29.911095 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-04-04 00:54:29.911098 | orchestrator | Saturday 04 April 2026 00:53:51 +0000 (0:00:07.996) 0:05:30.728 ********
2026-04-04 00:54:29.911102 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.911106 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.911109 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.911113 | orchestrator |
2026-04-04 00:54:29.911117 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-04 00:54:29.911121 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:07.996) 0:05:38.724 ********
2026-04-04 00:54:29.911124 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911128 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911134 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911137 | orchestrator |
2026-04-04 00:54:29.911141 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-04 00:54:29.911145 | orchestrator | Saturday 04 April 2026 00:54:00 +0000 (0:00:01.044) 0:05:39.769 ********
2026-04-04 00:54:29.911149 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.911153 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.911156 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.911160 | orchestrator |
2026-04-04 00:54:29.911164 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-04 00:54:29.911168 | orchestrator | Saturday 04 April 2026 00:54:08 +0000 (0:00:07.742) 0:05:47.512 ********
2026-04-04 00:54:29.911171 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911175 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911179 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911182 | orchestrator |
2026-04-04 00:54:29.911186 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-04 00:54:29.911190 | orchestrator | Saturday 04 April 2026 00:54:12 +0000 (0:00:03.723) 0:05:51.235 ********
2026-04-04 00:54:29.911194 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:29.911198 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:29.911202 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:29.911209 | orchestrator |
2026-04-04 00:54:29.911218 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-04 00:54:29.911224 | orchestrator | Saturday 04 April 2026 00:54:21 +0000 (0:00:09.332) 0:06:00.567 ********
2026-04-04 00:54:29.911230 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.911236 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.911242 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.911249 | orchestrator |
2026-04-04 00:54:29.911255 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-04 00:54:29.911262 | orchestrator | Saturday 04 April 2026 00:54:22 +0000 (0:00:00.642) 0:06:01.210 ********
2026-04-04 00:54:29.911268 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.911275 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.911281 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.911287 | orchestrator |
2026-04-04 00:54:29.911294 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-04 00:54:29.911305 | orchestrator | Saturday 04 April 2026 00:54:22 +0000 (0:00:00.357) 0:06:01.568 ********
2026-04-04 00:54:29.911310 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.911316 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.911321 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.911326 | orchestrator |
2026-04-04 00:54:29.911333 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-04 00:54:29.911338 | orchestrator | Saturday 04 April 2026 00:54:22 +0000 (0:00:00.353) 0:06:01.921 ********
2026-04-04 00:54:29.911344 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.911350 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.911356 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.911361 | orchestrator |
2026-04-04 00:54:29.911367 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-04 00:54:29.911373 | orchestrator | Saturday 04 April 2026 00:54:23 +0000 (0:00:00.334) 0:06:02.256 ********
2026-04-04 00:54:29.911379 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.911385 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.911390 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.911395 | orchestrator |
2026-04-04 00:54:29.911401 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-04 00:54:29.911408 | orchestrator | Saturday 04 April 2026 00:54:23 +0000 (0:00:00.662) 0:06:02.919 ********
2026-04-04 00:54:29.911414 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:29.911420 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:29.911426 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:29.911433 | orchestrator |
2026-04-04 00:54:29.911439 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-04 00:54:29.911445 | orchestrator | Saturday 04 April 2026 00:54:24 +0000 (0:00:00.355) 0:06:03.274 ********
2026-04-04 00:54:29.911452 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911458 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911464 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911471 | orchestrator |
2026-04-04 00:54:29.911478 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-04 00:54:29.911484 | orchestrator | Saturday 04 April 2026 00:54:25 +0000 (0:00:00.950) 0:06:04.225 ********
2026-04-04 00:54:29.911491 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:29.911497 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:29.911503 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:29.911509 | orchestrator |
2026-04-04 00:54:29.911516 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:54:29.911523 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-04 00:54:29.911529 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-04 00:54:29.911536 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-04 00:54:29.911542 | orchestrator |
2026-04-04 00:54:29.911549 | orchestrator |
2026-04-04 00:54:29.911555 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:54:29.911561 | orchestrator | Saturday 04 April 2026 00:54:26 +0000 (0:00:00.972) 0:06:05.197 ********
2026-04-04 00:54:29.911568 | orchestrator | ===============================================================================
2026-04-04 00:54:29.911575 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.33s
2026-04-04 00:54:29.911581 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.00s
2026-04-04 00:54:29.911587 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.74s
2026-04-04 00:54:29.911598 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.59s
2026-04-04 00:54:29.911609 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.64s
2026-04-04 00:54:29.911616 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.32s
2026-04-04 00:54:29.911622 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.09s
2026-04-04 00:54:29.911628 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.98s
2026-04-04 00:54:29.911635 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.91s
2026-04-04 00:54:29.911641 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.80s
2026-04-04 00:54:29.911647 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.78s
2026-04-04 00:54:29.911653 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.78s
2026-04-04 00:54:29.911660 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.47s
2026-04-04 00:54:29.911666 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.36s
2026-04-04 00:54:29.911676 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.02s
2026-04-04 00:54:29.911683 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.89s
2026-04-04 00:54:29.911689 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.86s
2026-04-04 00:54:29.911695 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.83s
2026-04-04 00:54:29.911701 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.72s
2026-04-04 00:54:29.911707 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 3.49s
2026-04-04 00:54:29.911713 | orchestrator | 2026-04-04 00:54:29 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:32.928327 | orchestrator | 2026-04-04 00:54:32 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:32.929286 | orchestrator | 2026-04-04 00:54:32 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:32.930485 | orchestrator | 2026-04-04 00:54:32 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:32.930738 | orchestrator | 2026-04-04 00:54:32 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:35.962753 | orchestrator | 2026-04-04 00:54:35 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:35.965567 | orchestrator | 2026-04-04 00:54:35 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:35.966269 | orchestrator | 2026-04-04 00:54:35 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:35.966296 | orchestrator | 2026-04-04 00:54:35 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:38.999661 | orchestrator | 2026-04-04 00:54:38 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:39.000627 | orchestrator | 2026-04-04 00:54:39 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:39.001703 | orchestrator | 2026-04-04 00:54:39 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:39.001728 | orchestrator | 2026-04-04 00:54:39 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:42.031829 | orchestrator | 2026-04-04 00:54:42 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:42.032202 | orchestrator | 2026-04-04 00:54:42 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:42.033280 | orchestrator | 2026-04-04 00:54:42 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:42.034607 | orchestrator | 2026-04-04 00:54:42 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:45.064714 | orchestrator | 2026-04-04 00:54:45 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:45.067759 | orchestrator | 2026-04-04 00:54:45 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:45.068525 | orchestrator | 2026-04-04 00:54:45 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:45.068544 | orchestrator | 2026-04-04 00:54:45 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:48.091580 | orchestrator | 2026-04-04 00:54:48 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:48.092175 | orchestrator | 2026-04-04 00:54:48 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:48.095314 | orchestrator | 2026-04-04 00:54:48 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:48.095362 | orchestrator | 2026-04-04 00:54:48 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:51.121392 | orchestrator | 2026-04-04 00:54:51 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:51.121859 | orchestrator | 2026-04-04 00:54:51 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:51.123041 | orchestrator | 2026-04-04 00:54:51 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:51.123075 | orchestrator | 2026-04-04 00:54:51 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:54.146642 | orchestrator | 2026-04-04 00:54:54 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:54.147604 | orchestrator | 2026-04-04 00:54:54 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:54.149736 | orchestrator | 2026-04-04 00:54:54 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:54.149836 | orchestrator | 2026-04-04 00:54:54 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:57.177161 | orchestrator | 2026-04-04 00:54:57 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:54:57.177208 | orchestrator | 2026-04-04 00:54:57 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:54:57.177214 | orchestrator | 2026-04-04 00:54:57 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:54:57.177223 | orchestrator | 2026-04-04 00:54:57 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:00.215009 | orchestrator | 2026-04-04 00:55:00 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:00.215061 | orchestrator | 2026-04-04 00:55:00 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:00.224268 | orchestrator | 2026-04-04 00:55:00 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:00.224327 | orchestrator | 2026-04-04 00:55:00 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:03.268257 | orchestrator | 2026-04-04 00:55:03 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:03.269984 | orchestrator | 2026-04-04 00:55:03 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:03.271102 | orchestrator | 2026-04-04 00:55:03 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:03.271128 | orchestrator | 2026-04-04 00:55:03 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:06.307171 | orchestrator | 2026-04-04 00:55:06 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:06.308230 | orchestrator | 2026-04-04 00:55:06 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:06.310382 | orchestrator | 2026-04-04 00:55:06 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:06.310432 | orchestrator | 2026-04-04 00:55:06 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:09.341444 | orchestrator | 2026-04-04 00:55:09 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:09.343321 | orchestrator | 2026-04-04 00:55:09 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:09.345637 | orchestrator | 2026-04-04 00:55:09 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:09.345667 | orchestrator | 2026-04-04 00:55:09 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:12.390538 | orchestrator | 2026-04-04 00:55:12 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:12.392406 | orchestrator | 2026-04-04 00:55:12 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:12.394493 | orchestrator | 2026-04-04 00:55:12 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:12.394550 | orchestrator | 2026-04-04 00:55:12 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:15.439399 | orchestrator | 2026-04-04 00:55:15 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:15.442962 | orchestrator | 2026-04-04 00:55:15 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:15.445787 | orchestrator | 2026-04-04 00:55:15 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:15.445926 | orchestrator | 2026-04-04 00:55:15 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:18.484523 | orchestrator | 2026-04-04 00:55:18 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:18.487264 | orchestrator | 2026-04-04 00:55:18 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:18.488620 | orchestrator | 2026-04-04 00:55:18 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:18.488669 | orchestrator | 2026-04-04 00:55:18 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:21.531648 | orchestrator | 2026-04-04 00:55:21 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:21.531976 | orchestrator | 2026-04-04 00:55:21 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:21.533070 | orchestrator | 2026-04-04 00:55:21 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:21.533093 | orchestrator | 2026-04-04 00:55:21 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:24.579582 | orchestrator | 2026-04-04 00:55:24 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:24.581190 | orchestrator | 2026-04-04 00:55:24 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:24.582978 | orchestrator | 2026-04-04 00:55:24 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:24.583013 | orchestrator | 2026-04-04 00:55:24 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:27.626572 | orchestrator | 2026-04-04 00:55:27 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:27.626999 | orchestrator | 2026-04-04 00:55:27 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:27.627976 | orchestrator | 2026-04-04 00:55:27 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED
2026-04-04 00:55:27.628014 | orchestrator | 2026-04-04 00:55:27 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:30.671315 | orchestrator | 2026-04-04 00:55:30 | INFO  | Task
e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED 2026-04-04 00:55:30.672495 | orchestrator | 2026-04-04 00:55:30 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:55:30.673439 | orchestrator | 2026-04-04 00:55:30 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state STARTED 2026-04-04 00:55:30.673481 | orchestrator | 2026-04-04 00:55:30 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:33.724240 | orchestrator | 2026-04-04 00:55:33 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED 2026-04-04 00:55:33.729953 | orchestrator | 2026-04-04 00:55:33 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:55:33.731394 | orchestrator | 2026-04-04 00:55:33 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:55:33.737186 | orchestrator | 2026-04-04 00:55:33 | INFO  | Task 8ee163ae-bd62-42f7-b681-5855b26add7d is in state SUCCESS 2026-04-04 00:55:33.739157 | orchestrator | 2026-04-04 00:55:33.739218 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-04 00:55:33.739235 | orchestrator | 2.16.14 2026-04-04 00:55:33.739246 | orchestrator | 2026-04-04 00:55:33.739256 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-04 00:55:33.739268 | orchestrator | 2026-04-04 00:55:33.739279 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-04 00:55:33.739290 | orchestrator | Saturday 04 April 2026 00:45:34 +0000 (0:00:00.723) 0:00:00.723 ******** 2026-04-04 00:55:33.739302 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.739314 | orchestrator | 2026-04-04 00:55:33.739325 | orchestrator | TASK [ceph-facts : Check if it is atomic host] 
*********************************
2026-04-04 00:55:33.739332 | orchestrator | Saturday 04 April 2026 00:45:35 +0000 (0:00:01.169) 0:00:01.892 ********
2026-04-04 00:55:33.739338 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.739345 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.739352 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.739358 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.739364 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.739434 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.739440 | orchestrator |
2026-04-04 00:55:33.739446 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-04 00:55:33.739451 | orchestrator | Saturday 04 April 2026 00:45:37 +0000 (0:00:01.756) 0:00:03.649 ********
2026-04-04 00:55:33.739457 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.739462 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.739467 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.739473 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.739478 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.739483 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.739489 | orchestrator |
2026-04-04 00:55:33.739494 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-04 00:55:33.739499 | orchestrator | Saturday 04 April 2026 00:45:38 +0000 (0:00:00.606) 0:00:04.255 ********
2026-04-04 00:55:33.739505 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.739510 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.739515 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.739537 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.739543 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.739548 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.739554 | orchestrator |
2026-04-04 00:55:33.739559 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-04 00:55:33.739565 | orchestrator | Saturday 04 April 2026 00:45:39 +0000 (0:00:01.031) 0:00:05.286 ********
2026-04-04 00:55:33.739570 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.739729 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.739734 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.739740 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.739745 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.739750 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.739756 | orchestrator |
2026-04-04 00:55:33.739761 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-04 00:55:33.739767 | orchestrator | Saturday 04 April 2026 00:45:39 +0000 (0:00:00.822) 0:00:06.109 ********
2026-04-04 00:55:33.739782 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.739787 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.739793 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.739798 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.739804 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.739809 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.739814 | orchestrator |
2026-04-04 00:55:33.739841 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-04 00:55:33.739848 | orchestrator | Saturday 04 April 2026 00:45:40 +0000 (0:00:01.037) 0:00:07.147 ********
2026-04-04 00:55:33.739854 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.739859 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.739865 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.739870 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.739875 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.739881 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.739886 | orchestrator |
2026-04-04 00:55:33.739892 |
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-04 00:55:33.739898 | orchestrator | Saturday 04 April 2026 00:45:42 +0000 (0:00:01.201) 0:00:08.348 ******** 2026-04-04 00:55:33.739903 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.739909 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.739915 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.739920 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.739926 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.739931 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.739936 | orchestrator | 2026-04-04 00:55:33.739976 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-04 00:55:33.739982 | orchestrator | Saturday 04 April 2026 00:45:42 +0000 (0:00:00.683) 0:00:09.031 ******** 2026-04-04 00:55:33.739988 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.739993 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.739999 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.740004 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.740010 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.740015 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.740021 | orchestrator | 2026-04-04 00:55:33.740026 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-04 00:55:33.740031 | orchestrator | Saturday 04 April 2026 00:45:43 +0000 (0:00:00.885) 0:00:09.917 ******** 2026-04-04 00:55:33.740037 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:55:33.740043 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:55:33.740048 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 
00:55:33.740053 | orchestrator | 2026-04-04 00:55:33.740059 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-04 00:55:33.740064 | orchestrator | Saturday 04 April 2026 00:45:44 +0000 (0:00:00.613) 0:00:10.530 ******** 2026-04-04 00:55:33.740077 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.740083 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.740088 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.740104 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.740110 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.740116 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.740121 | orchestrator | 2026-04-04 00:55:33.740126 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-04 00:55:33.740161 | orchestrator | Saturday 04 April 2026 00:45:45 +0000 (0:00:01.326) 0:00:11.857 ******** 2026-04-04 00:55:33.740168 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:55:33.740174 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:55:33.740179 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:55:33.740184 | orchestrator | 2026-04-04 00:55:33.740190 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-04 00:55:33.740195 | orchestrator | Saturday 04 April 2026 00:45:48 +0000 (0:00:03.127) 0:00:14.985 ******** 2026-04-04 00:55:33.740201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-04 00:55:33.740207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-04 00:55:33.740212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-04 00:55:33.740217 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.740223 | 
orchestrator | 2026-04-04 00:55:33.740228 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-04 00:55:33.740234 | orchestrator | Saturday 04 April 2026 00:45:49 +0000 (0:00:00.526) 0:00:15.512 ******** 2026-04-04 00:55:33.740241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740259 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.740265 | orchestrator | 2026-04-04 00:55:33.740270 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-04 00:55:33.740276 | orchestrator | Saturday 04 April 2026 00:45:49 +0000 (0:00:00.702) 0:00:16.214 ******** 2026-04-04 00:55:33.740287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740652 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740695 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.740705 | orchestrator | 2026-04-04 00:55:33.740714 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-04 00:55:33.740724 | orchestrator | Saturday 04 April 2026 00:45:50 +0000 (0:00:00.222) 0:00:16.436 ******** 2026-04-04 00:55:33.740796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-04 00:45:46.469908', 'end': '2026-04-04 00:45:46.571995', 'delta': '0:00:00.102087', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740810 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-04 00:45:47.797729', 'end': '2026-04-04 00:45:47.893769', 'delta': '0:00:00.096040', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-04 00:45:48.475664', 'end': '2026-04-04 00:45:48.560211', 'delta': '0:00:00.084547', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.740846 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.740856 | orchestrator | 2026-04-04 00:55:33.740969 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-04 00:55:33.740976 | orchestrator | Saturday 04 April 2026 00:45:50 +0000 (0:00:00.535) 0:00:16.972 ******** 2026-04-04 00:55:33.740982 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.740993 | orchestrator | ok: [testbed-node-4] 2026-04-04 
00:55:33.741002 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.741010 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.741025 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.741244 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.741264 | orchestrator | 2026-04-04 00:55:33.741273 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-04 00:55:33.741290 | orchestrator | Saturday 04 April 2026 00:45:52 +0000 (0:00:01.333) 0:00:18.305 ******** 2026-04-04 00:55:33.741301 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:55:33.741310 | orchestrator | 2026-04-04 00:55:33.741320 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-04 00:55:33.741329 | orchestrator | Saturday 04 April 2026 00:45:52 +0000 (0:00:00.803) 0:00:19.109 ******** 2026-04-04 00:55:33.741349 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.741359 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.741369 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.741378 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.741388 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.741398 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.741489 | orchestrator | 2026-04-04 00:55:33.741502 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-04 00:55:33.741511 | orchestrator | Saturday 04 April 2026 00:45:54 +0000 (0:00:01.637) 0:00:20.746 ******** 2026-04-04 00:55:33.741520 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.741529 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.741539 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.741548 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.741558 | orchestrator | skipping: [testbed-node-1] 2026-04-04 
00:55:33.741568 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.741577 | orchestrator | 2026-04-04 00:55:33.741586 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-04 00:55:33.741596 | orchestrator | Saturday 04 April 2026 00:45:56 +0000 (0:00:01.703) 0:00:22.449 ******** 2026-04-04 00:55:33.741605 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.741615 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.741624 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.741634 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.741643 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.741956 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.741975 | orchestrator | 2026-04-04 00:55:33.741985 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-04 00:55:33.741994 | orchestrator | Saturday 04 April 2026 00:45:56 +0000 (0:00:00.729) 0:00:23.179 ******** 2026-04-04 00:55:33.742004 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.742051 | orchestrator | 2026-04-04 00:55:33.742060 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-04 00:55:33.742066 | orchestrator | Saturday 04 April 2026 00:45:57 +0000 (0:00:00.158) 0:00:23.338 ******** 2026-04-04 00:55:33.742073 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.742083 | orchestrator | 2026-04-04 00:55:33.742093 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-04 00:55:33.742103 | orchestrator | Saturday 04 April 2026 00:45:57 +0000 (0:00:00.191) 0:00:23.529 ******** 2026-04-04 00:55:33.742112 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.742121 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.742130 | orchestrator | skipping: [testbed-node-5] 2026-04-04 
00:55:33.742246 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.742260 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.742271 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.742280 | orchestrator | 2026-04-04 00:55:33.742290 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-04 00:55:33.742299 | orchestrator | Saturday 04 April 2026 00:45:57 +0000 (0:00:00.530) 0:00:24.060 ******** 2026-04-04 00:55:33.742309 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.742319 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.742329 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.742338 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.742348 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.742357 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.742367 | orchestrator | 2026-04-04 00:55:33.742377 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-04 00:55:33.742386 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:01.078) 0:00:25.138 ******** 2026-04-04 00:55:33.742395 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.742404 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.742426 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.742436 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.742446 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.742455 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.742464 | orchestrator | 2026-04-04 00:55:33.742473 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-04 00:55:33.742482 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.915) 0:00:26.054 ******** 2026-04-04 00:55:33.742492 | orchestrator | skipping: [testbed-node-3] 2026-04-04 
00:55:33.742502 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.742511 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.742948 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.742958 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.742968 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.742977 | orchestrator | 2026-04-04 00:55:33.742987 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-04 00:55:33.742998 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:00.621) 0:00:26.675 ******** 2026-04-04 00:55:33.743007 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.743017 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.743026 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.743036 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.743045 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.743055 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.743064 | orchestrator | 2026-04-04 00:55:33.743073 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-04 00:55:33.743083 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:00.415) 0:00:27.091 ******** 2026-04-04 00:55:33.743093 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.743103 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.743113 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.743391 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.743400 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.743406 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.743412 | orchestrator | 2026-04-04 00:55:33.743419 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-04 00:55:33.743425 | orchestrator | Saturday 04 
April 2026 00:46:01 +0000 (0:00:00.601) 0:00:27.693 ******** 2026-04-04 00:55:33.743430 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.743436 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.743441 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.743447 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.743452 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.743458 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.743463 | orchestrator | 2026-04-04 00:55:33.743469 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-04 00:55:33.743474 | orchestrator | Saturday 04 April 2026 00:46:02 +0000 (0:00:00.553) 0:00:28.246 ******** 2026-04-04 00:55:33.743482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3', 'dm-uuid-LVM-wozvLOh456sUfn9PqWV2oYBmxucNglfIsRj4iQcmeGu13Yo668Xa1ie8B5Vp2zNd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.743490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf', 'dm-uuid-LVM-3GO6ulA2UCr79XQtMUmeGCQVwsfTCN3Q1E6l2EmACUpV8mUHmqxWcqJe2RaqaMTV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.743753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a', 'dm-uuid-LVM-pgeNJmKNp28pjV3fx86BCWc8wX4QALTFGsYLqbIr0gemBAC5etWKyA4QhGr3xbbZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LPDpNU-e6eu-lfRM-x6KR-689B-8pfF-RCrCE6', 'scsi-0QEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c', 'scsi-SQEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.743771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b', 'dm-uuid-LVM-Zgd0Gt58TKykaDOn90TkpYikcAaeJTdGNTvvZQdWx20IpbH2fKcdqHJSe79cISTu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.743781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xv41ba-sM5C-aoVy-fzVJ-f2Kt-Dddx-6eEUlG', 'scsi-0QEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10', 'scsi-SQEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.743845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903', 'scsi-SQEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744458 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.744467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q6wpoU-SZHW-edcv-Crdi-vP9G-hz0J-rB1IPk', 'scsi-0QEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a', 'scsi-SQEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3BfRvu-NPKS-4GHk-tgZa-LaI8-IdqC-seyFLh', 'scsi-0QEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca', 'scsi-SQEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338', 'scsi-SQEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae', 'dm-uuid-LVM-I9mvQrhzD9WRmt2aKBMUg5i54orKM11aDq10QeDsfxP8JRu4O5JDaP1Hg8Rxd7hg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6', 'dm-uuid-LVM-e5jS3yC23cZhqTNE2Gedcepj8x5rLXlu5xWcQfH2U9iwJYpApQDbI8mCzpWfQznY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744633 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.744642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3342fG-blzy-o6fy-UO4K-31rX-ThXL-EiYsBj', 'scsi-0QEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434', 'scsi-SQEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.744971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.744980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tb2xf4-QKmJ-XvbT-1Uvb-cQ8T-LMwd-9FcBoK', 'scsi-0QEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364', 'scsi-SQEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.745062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0', 'scsi-SQEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.745105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part1', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part14', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part15', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part16', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.745123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.745192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:33.745206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745245 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.745259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745276 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.745284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:33.745362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part1', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part14', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part15', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part16', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:33.745388 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:33.745398 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.745407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.745416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.745425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-04 00:55:33.745453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.745522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.745535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.745546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.745555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:33.745578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part1', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part14', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part15', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part16', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:33.745647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:33.745660 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.745670 | orchestrator | 2026-04-04 00:55:33.745680 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-04 00:55:33.745690 | orchestrator | Saturday 04 April 2026 00:46:03 +0000 (0:00:01.295) 0:00:29.541 ******** 2026-04-04 00:55:33.745699 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3', 'dm-uuid-LVM-wozvLOh456sUfn9PqWV2oYBmxucNglfIsRj4iQcmeGu13Yo668Xa1ie8B5Vp2zNd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf', 'dm-uuid-LVM-3GO6ulA2UCr79XQtMUmeGCQVwsfTCN3Q1E6l2EmACUpV8mUHmqxWcqJe2RaqaMTV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.745956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16', 
'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LPDpNU-e6eu-lfRM-x6KR-689B-8pfF-RCrCE6', 'scsi-0QEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c', 'scsi-SQEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xv41ba-sM5C-aoVy-fzVJ-f2Kt-Dddx-6eEUlG', 'scsi-0QEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10', 'scsi-SQEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903', 'scsi-SQEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a', 'dm-uuid-LVM-pgeNJmKNp28pjV3fx86BCWc8wX4QALTFGsYLqbIr0gemBAC5etWKyA4QhGr3xbbZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746236 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b', 'dm-uuid-LVM-Zgd0Gt58TKykaDOn90TkpYikcAaeJTdGNTvvZQdWx20IpbH2fKcdqHJSe79cISTu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746262 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.746273 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746282 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746368 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746385 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746409 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746495 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-04 00:55:33.746522 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae', 'dm-uuid-LVM-I9mvQrhzD9WRmt2aKBMUg5i54orKM11aDq10QeDsfxP8JRu4O5JDaP1Hg8Rxd7hg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q6wpoU-SZHW-edcv-Crdi-vP9G-hz0J-rB1IPk', 'scsi-0QEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a', 'scsi-SQEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746547 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6', 'dm-uuid-LVM-e5jS3yC23cZhqTNE2Gedcepj8x5rLXlu5xWcQfH2U9iwJYpApQDbI8mCzpWfQznY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746612 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746644 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3BfRvu-NPKS-4GHk-tgZa-LaI8-IdqC-seyFLh', 'scsi-0QEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca', 'scsi-SQEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746658 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746669 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338', 'scsi-SQEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746679 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746784 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746794 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746834 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746916 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3342fG-blzy-o6fy-UO4K-31rX-ThXL-EiYsBj', 'scsi-0QEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434', 'scsi-SQEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.746946 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747012 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tb2xf4-QKmJ-XvbT-1Uvb-cQ8T-LMwd-9FcBoK', 'scsi-0QEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364', 'scsi-SQEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747040 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747050 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.747099 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0', 'scsi-SQEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747122 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747202 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747234 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part1', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part14', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part15', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part16', 'scsi-SQEMU_QEMU_HARDDISK_a02f1b50-e748-4ffa-92fc-a34c46f12dd0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747325 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747345 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747354 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747364 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:33.747378 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747389 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747398 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747469 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747482 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747497 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part1', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part14', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part15', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part16', 'scsi-SQEMU_QEMU_HARDDISK_cd83c7e7-5e97-436e-8e2f-fc883acebe13-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747514 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747591 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.747605 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.747615 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.747623 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747631 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747640 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747654 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747662 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747678 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747743 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747757 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747782 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part1', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part14', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part15', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part16', 'scsi-SQEMU_QEMU_HARDDISK_863798db-c475-4907-865f-d751361d3bd3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747799 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:33.747809 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.747867 | orchestrator |
2026-04-04 00:55:33.747960 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-04 00:55:33.747976 | orchestrator | Saturday 04 April 2026 00:46:04 +0000 (0:00:01.079) 0:00:30.621 ********
2026-04-04 00:55:33.747986 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.747995 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.748004 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.748013 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.748022 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.748031 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.748040 | orchestrator |
2026-04-04 00:55:33.748048 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-04 00:55:33.748057 | orchestrator | Saturday 04 April 2026 00:46:05 +0000 (0:00:01.276) 0:00:31.898 ********
2026-04-04 00:55:33.748066 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.748075 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.748085 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.748106 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.748114 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.748131 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.748140 | orchestrator |
2026-04-04 00:55:33.748149 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-04 00:55:33.748158 | orchestrator | Saturday 04 April 2026 00:46:06 +0000 (0:00:01.717) 0:00:32.725 ********
2026-04-04 00:55:33.748167 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.748189 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.748198 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.748207 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.748215 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.748224 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.748235 | orchestrator |
2026-04-04 00:55:33.748244 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-04 00:55:33.748254 | orchestrator | Saturday 04 April 2026 00:46:08 +0000 (0:00:01.717) 0:00:34.443 ********
2026-04-04 00:55:33.748262 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.748271 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.748279 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.748288 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.748297 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.748306 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.748315 | orchestrator |
2026-04-04 00:55:33.748325 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-04 00:55:33.748335 | orchestrator | Saturday 04 April 2026 00:46:09 +0000 (0:00:00.880) 0:00:35.324 ********
2026-04-04 00:55:33.748344 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.748353 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.748373 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.748383 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.748391 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.748398 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.748406 | orchestrator |
2026-04-04 00:55:33.748413 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-04 00:55:33.748421 | orchestrator | Saturday 04 April 2026 00:46:10 +0000 (0:00:01.042) 0:00:36.367 ********
2026-04-04 00:55:33.748429 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.748437 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.748445 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.748460 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.748468 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.748476 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.748485 | orchestrator |
2026-04-04 00:55:33.748492 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-04 00:55:33.748501 | orchestrator | Saturday 04 April 2026 00:46:10 +0000 (0:00:00.818) 0:00:37.186 ********
2026-04-04 00:55:33.748508 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-04 00:55:33.748517 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:55:33.748526 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-04 00:55:33.748534 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-04 00:55:33.748542 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:55:33.748549 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:55:33.748557 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-04 00:55:33.748566 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-04 00:55:33.748575 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-04 00:55:33.748584 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-04 00:55:33.748593 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-04 00:55:33.748601 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-04 00:55:33.748611 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:55:33.748620 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-04 00:55:33.748629 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-04 00:55:33.748638 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:55:33.748646 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-04 00:55:33.748655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:55:33.748663 | orchestrator |
2026-04-04 00:55:33.748672 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-04 00:55:33.748682 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 (0:00:03.575) 0:00:40.761 ********
2026-04-04 00:55:33.748689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:55:33.748697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:55:33.748705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:55:33.748713 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-04 00:55:33.748722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-04 00:55:33.748731 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-04 00:55:33.748740 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-04 00:55:33.748748 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-04 00:55:33.748800 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-04 00:55:33.748810 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.748838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:55:33.748847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:55:33.748856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:55:33.748872 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.748881 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-04 00:55:33.748889 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-04 00:55:33.748899 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-04 00:55:33.748907 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.748916 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.748924 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.748932 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-04 00:55:33.748940 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-04 00:55:33.748948 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-04 00:55:33.748956 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.748963 | orchestrator |
2026-04-04 00:55:33.748971 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-04 00:55:33.748978 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.702) 0:00:41.464 ********
2026-04-04 00:55:33.748985 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.748992 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.749000 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.749008 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.749016 | orchestrator |
2026-04-04 00:55:33.749024 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-04 00:55:33.749033 | orchestrator | Saturday 04 April 2026 00:46:16 +0000 (0:00:01.077) 0:00:42.541 ********
2026-04-04 00:55:33.749038 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749043 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.749048 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.749053 | orchestrator |
2026-04-04 00:55:33.749058 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-04 00:55:33.749062 | orchestrator | Saturday 04 April 2026 00:46:16 +0000 (0:00:00.347) 0:00:42.888 ********
2026-04-04 00:55:33.749067 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749073 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.749080 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.749088 | orchestrator |
2026-04-04 00:55:33.749098 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-04 00:55:33.749108 | orchestrator | Saturday 04 April 2026 00:46:17 +0000 (0:00:00.368) 0:00:43.257 ********
2026-04-04 00:55:33.749116 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749124 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.749137 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.749145 | orchestrator |
2026-04-04 00:55:33.749153 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-04 00:55:33.749159 | orchestrator | Saturday 04 April 2026 00:46:17 +0000 (0:00:00.364) 0:00:43.622 ********
2026-04-04 00:55:33.749166 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.749175 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.749183 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.749191 | orchestrator |
2026-04-04 00:55:33.749199 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-04 00:55:33.749207 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:00.826) 0:00:44.448 ********
2026-04-04 00:55:33.749215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.749223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.749231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.749238 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749246 | orchestrator |
2026-04-04 00:55:33.749254 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-04 00:55:33.749269 | orchestrator | Saturday 04 April 2026 00:46:19 +0000 (0:00:00.845) 0:00:45.294 ********
2026-04-04 00:55:33.749274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.749279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.749284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.749289 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749293 | orchestrator |
2026-04-04 00:55:33.749298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-04 00:55:33.749303 | orchestrator | Saturday 04 April 2026 00:46:19 +0000 (0:00:00.492) 0:00:45.786 ********
2026-04-04 00:55:33.749308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.749313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.749318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.749322 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749327 | orchestrator |
2026-04-04 00:55:33.749332 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-04 00:55:33.749337 | orchestrator | Saturday 04 April 2026 00:46:19 +0000 (0:00:00.394) 0:00:46.181 ********
2026-04-04 00:55:33.749342 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.749347 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.749351 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.749356 | orchestrator |
2026-04-04 00:55:33.749361 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-04 00:55:33.749369 | orchestrator | Saturday 04 April 2026 00:46:20 +0000 (0:00:00.318) 0:00:46.500 ********
2026-04-04 00:55:33.749376 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-04 00:55:33.749384 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-04 00:55:33.749430 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-04 00:55:33.749439 | orchestrator |
2026-04-04 00:55:33.749446 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-04 00:55:33.749453 | orchestrator | Saturday 04 April 2026 00:46:20 +0000 (0:00:00.706) 0:00:47.206 ********
2026-04-04 00:55:33.749460 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-04 00:55:33.749468 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-04 00:55:33.749476 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-04 00:55:33.749484 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.749492 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-04 00:55:33.749498 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-04 00:55:33.749502 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-04 00:55:33.749507 | orchestrator |
2026-04-04 00:55:33.749511 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-04 00:55:33.749516 | orchestrator | Saturday 04 April 2026 00:46:21 +0000 (0:00:00.971) 0:00:48.177 ********
2026-04-04 00:55:33.749520 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-04 00:55:33.749525 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-04 00:55:33.749529 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-04 00:55:33.749534 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.749538 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-04 00:55:33.749543 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-04 00:55:33.749547 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-04 00:55:33.749561 | orchestrator |
2026-04-04 00:55:33.749566 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-04 00:55:33.749570 | orchestrator | Saturday 04 April 2026 00:46:23 +0000 (0:00:01.981) 0:00:50.158 ********
2026-04-04 00:55:33.749575 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.749581 | orchestrator |
2026-04-04 00:55:33.749585 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-04 00:55:33.749590 | orchestrator | Saturday 04 April 2026 00:46:25 +0000 (0:00:01.160) 0:00:51.318 ********
2026-04-04 00:55:33.749599 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.749604 | orchestrator |
2026-04-04 00:55:33.749608 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-04 00:55:33.749613 | orchestrator | Saturday 04 April 2026 00:46:26 +0000 (0:00:01.153) 0:00:52.472 ********
2026-04-04 00:55:33.749617 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749622 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.749626 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.749633 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.749640 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.749647 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.749657 | orchestrator |
2026-04-04 00:55:33.749666 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-04 00:55:33.749673 | orchestrator | Saturday 04 April 2026 00:46:27 +0000 (0:00:01.363) 0:00:53.835 ********
2026-04-04 00:55:33.749680 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.749687 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.749694 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.749701 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.749707 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.749714 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.749721 | orchestrator |
2026-04-04 00:55:33.749728 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-04 00:55:33.749734 | orchestrator | Saturday 04 April 2026 00:46:28 +0000 (0:00:00.881) 0:00:54.717 ********
2026-04-04 00:55:33.749742 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.749749 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.749757 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.749763 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.749770 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.749778 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.749785 | orchestrator |
2026-04-04 00:55:33.749792 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-04 00:55:33.749799 | orchestrator | Saturday 04 April 2026 00:46:29 +0000 (0:00:00.831) 0:00:55.548 ********
2026-04-04 00:55:33.749807 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.749814 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.749841 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.749849 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.749857 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.749864 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.749872 | orchestrator |
2026-04-04 00:55:33.749880 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-04 00:55:33.749888 | orchestrator | Saturday 04 April 2026 00:46:30 +0000 (0:00:00.922) 0:00:56.307 ********
2026-04-04 00:55:33.749896 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749903 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.749912 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.749917 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.749921 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.749952 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.749966 | orchestrator |
2026-04-04 00:55:33.749970 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-04 00:55:33.749975 | orchestrator | Saturday 04 April 2026 00:46:31 +0000 (0:00:00.728) 0:00:57.230 ********
2026-04-04 00:55:33.749979 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.749984 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.749988 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.749993 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.749997 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.750002 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.750006 | orchestrator |
2026-04-04 00:55:33.750011 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-04 00:55:33.750046 | orchestrator | Saturday 04 April 2026 00:46:31 +0000 (0:00:00.528) 0:00:57.958 ********
2026-04-04 00:55:33.750054 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.750062 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.750070 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.750078 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.750086 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.750094 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.750099 | orchestrator |
2026-04-04 00:55:33.750104 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-04 00:55:33.750109 | orchestrator | Saturday 04 April 2026 00:46:32 +0000 (0:00:00.528) 0:00:58.486 ********
2026-04-04 00:55:33.750113 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.750118 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.750122 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.750127 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.750132 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.750136 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.750140 | orchestrator |
2026-04-04 00:55:33.750145 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-04 00:55:33.750150 | orchestrator | Saturday 04 April 2026 00:46:33 +0000 (0:00:01.249) 0:00:59.736 ********
2026-04-04 00:55:33.750154 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.750159 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.750166 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.750173 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.750181 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.750188 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.750196 | orchestrator |
2026-04-04 00:55:33.750204 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-04 00:55:33.750209 | orchestrator | Saturday 04 April 2026 00:46:34 +0000 (0:00:01.250) 0:01:00.986 ********
2026-04-04 00:55:33.750214 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.750218 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.750223 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.750227 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.750232 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.750236 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.750241 |
orchestrator | 2026-04-04 00:55:33.750246 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:55:33.750250 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:00.749) 0:01:01.736 ******** 2026-04-04 00:55:33.750255 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.750263 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.750268 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.750273 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.750277 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.750282 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.750286 | orchestrator | 2026-04-04 00:55:33.750291 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:55:33.750295 | orchestrator | Saturday 04 April 2026 00:46:36 +0000 (0:00:00.631) 0:01:02.368 ******** 2026-04-04 00:55:33.750300 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.750309 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.750313 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.750317 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.750322 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.750326 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.750331 | orchestrator | 2026-04-04 00:55:33.750335 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:55:33.750340 | orchestrator | Saturday 04 April 2026 00:46:37 +0000 (0:00:01.081) 0:01:03.449 ******** 2026-04-04 00:55:33.750345 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.750349 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.750354 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.750358 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.750363 | orchestrator | skipping: [testbed-node-1] 2026-04-04 
00:55:33.750367 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.750372 | orchestrator | 2026-04-04 00:55:33.750376 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:55:33.750381 | orchestrator | Saturday 04 April 2026 00:46:37 +0000 (0:00:00.610) 0:01:04.060 ******** 2026-04-04 00:55:33.750385 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.750390 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.750394 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.750399 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.750403 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.750408 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.750412 | orchestrator | 2026-04-04 00:55:33.750417 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:55:33.750421 | orchestrator | Saturday 04 April 2026 00:46:38 +0000 (0:00:00.735) 0:01:04.796 ******** 2026-04-04 00:55:33.750426 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.750430 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.750435 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.750439 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.750444 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.750448 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.750453 | orchestrator | 2026-04-04 00:55:33.750457 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:55:33.750462 | orchestrator | Saturday 04 April 2026 00:46:39 +0000 (0:00:00.808) 0:01:05.605 ******** 2026-04-04 00:55:33.750466 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.750471 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.750475 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.750480 | 
orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.750506 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.750511 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.750516 | orchestrator | 2026-04-04 00:55:33.750520 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:55:33.750525 | orchestrator | Saturday 04 April 2026 00:46:40 +0000 (0:00:00.923) 0:01:06.528 ******** 2026-04-04 00:55:33.750529 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.750534 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.750538 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.750543 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.750548 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.750552 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.750557 | orchestrator | 2026-04-04 00:55:33.750561 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-04 00:55:33.750566 | orchestrator | Saturday 04 April 2026 00:46:41 +0000 (0:00:00.948) 0:01:07.477 ******** 2026-04-04 00:55:33.750570 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.750575 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.750579 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.750584 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.750588 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.750597 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.750602 | orchestrator | 2026-04-04 00:55:33.750606 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:55:33.750611 | orchestrator | Saturday 04 April 2026 00:46:42 +0000 (0:00:01.463) 0:01:08.941 ******** 2026-04-04 00:55:33.750615 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.750620 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.750624 | 
orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.750629 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.750633 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.750638 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.750642 | orchestrator | 2026-04-04 00:55:33.750647 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-04 00:55:33.750651 | orchestrator | Saturday 04 April 2026 00:46:44 +0000 (0:00:01.371) 0:01:10.312 ******** 2026-04-04 00:55:33.750656 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.750660 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.750665 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.750670 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.750674 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.750679 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.750683 | orchestrator | 2026-04-04 00:55:33.750688 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-04 00:55:33.750692 | orchestrator | Saturday 04 April 2026 00:46:46 +0000 (0:00:02.610) 0:01:12.923 ******** 2026-04-04 00:55:33.750697 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.750701 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.750706 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.750710 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.750715 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.750719 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.750724 | orchestrator | 2026-04-04 00:55:33.750728 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-04 00:55:33.750733 | orchestrator | Saturday 04 April 2026 00:46:48 +0000 (0:00:02.188) 0:01:15.111 ******** 2026-04-04 00:55:33.750743 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.750755 | orchestrator | 2026-04-04 00:55:33.750763 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-04 00:55:33.750770 | orchestrator | Saturday 04 April 2026 00:46:50 +0000 (0:00:01.283) 0:01:16.395 ******** 2026-04-04 00:55:33.750777 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.750784 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.750791 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.750798 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.750805 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.750812 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.750864 | orchestrator | 2026-04-04 00:55:33.750873 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-04 00:55:33.750880 | orchestrator | Saturday 04 April 2026 00:46:50 +0000 (0:00:00.651) 0:01:17.046 ******** 2026-04-04 00:55:33.750888 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.750896 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.750903 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.750911 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.750918 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.750922 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.750927 | orchestrator | 2026-04-04 00:55:33.750932 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-04 00:55:33.750936 | orchestrator | Saturday 04 April 2026 00:46:51 +0000 (0:00:00.969) 0:01:18.015 ******** 2026-04-04 00:55:33.750941 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-04 
00:55:33.750950 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-04 00:55:33.750955 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-04 00:55:33.750960 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-04 00:55:33.750964 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-04 00:55:33.750969 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-04 00:55:33.750974 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-04 00:55:33.750979 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-04 00:55:33.750983 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-04 00:55:33.750988 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-04 00:55:33.751015 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-04 00:55:33.751020 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-04 00:55:33.751025 | orchestrator | 2026-04-04 00:55:33.751029 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-04 00:55:33.751034 | orchestrator | Saturday 04 April 2026 00:46:53 +0000 (0:00:01.759) 0:01:19.774 ******** 2026-04-04 00:55:33.751038 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.751043 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.751048 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.751052 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.751057 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.751061 | 
orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.751066 | orchestrator | 2026-04-04 00:55:33.751070 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-04 00:55:33.751075 | orchestrator | Saturday 04 April 2026 00:46:54 +0000 (0:00:01.308) 0:01:21.082 ******** 2026-04-04 00:55:33.751079 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751084 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751088 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751093 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751097 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751101 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751106 | orchestrator | 2026-04-04 00:55:33.751110 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-04 00:55:33.751115 | orchestrator | Saturday 04 April 2026 00:46:55 +0000 (0:00:00.498) 0:01:21.581 ******** 2026-04-04 00:55:33.751120 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751124 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751129 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751133 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751137 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751142 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751146 | orchestrator | 2026-04-04 00:55:33.751151 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-04 00:55:33.751155 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:00.648) 0:01:22.229 ******** 2026-04-04 00:55:33.751160 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751164 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751169 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751173 | 
orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751178 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751182 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751187 | orchestrator | 2026-04-04 00:55:33.751191 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-04 00:55:33.751196 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:00.483) 0:01:22.712 ******** 2026-04-04 00:55:33.751205 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.751210 | orchestrator | 2026-04-04 00:55:33.751218 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-04 00:55:33.751223 | orchestrator | Saturday 04 April 2026 00:46:57 +0000 (0:00:01.008) 0:01:23.720 ******** 2026-04-04 00:55:33.751227 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.751232 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.751236 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.751241 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.751245 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.751250 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.751254 | orchestrator | 2026-04-04 00:55:33.751259 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-04 00:55:33.751263 | orchestrator | Saturday 04 April 2026 00:48:10 +0000 (0:01:12.530) 0:02:36.251 ******** 2026-04-04 00:55:33.751268 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-04 00:55:33.751273 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-04 00:55:33.751277 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-04-04 00:55:33.751282 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751286 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-04 00:55:33.751291 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-04 00:55:33.751295 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-04 00:55:33.751300 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751304 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-04 00:55:33.751309 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-04 00:55:33.751313 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-04 00:55:33.751318 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751322 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-04 00:55:33.751327 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-04 00:55:33.751331 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-04 00:55:33.751336 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751340 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-04 00:55:33.751345 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-04 00:55:33.751349 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-04 00:55:33.751354 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751373 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-04 00:55:33.751378 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-04-04 00:55:33.751383 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-04 00:55:33.751387 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751392 | orchestrator | 2026-04-04 00:55:33.751396 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-04 00:55:33.751401 | orchestrator | Saturday 04 April 2026 00:48:10 +0000 (0:00:00.571) 0:02:36.823 ******** 2026-04-04 00:55:33.751405 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751410 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751414 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751419 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751428 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751436 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751443 | orchestrator | 2026-04-04 00:55:33.751450 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-04 00:55:33.751458 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.578) 0:02:37.402 ******** 2026-04-04 00:55:33.751463 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751468 | orchestrator | 2026-04-04 00:55:33.751472 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-04 00:55:33.751477 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.139) 0:02:37.541 ******** 2026-04-04 00:55:33.751482 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751486 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751491 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751496 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751500 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751504 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:55:33.751509 | orchestrator | 2026-04-04 00:55:33.751514 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-04 00:55:33.751518 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.514) 0:02:38.055 ******** 2026-04-04 00:55:33.751523 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751527 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751532 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751536 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751541 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751545 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751550 | orchestrator | 2026-04-04 00:55:33.751554 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-04 00:55:33.751559 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:00.621) 0:02:38.677 ******** 2026-04-04 00:55:33.751564 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751568 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751573 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751577 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751582 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751586 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751591 | orchestrator | 2026-04-04 00:55:33.751595 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-04 00:55:33.751603 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:00.539) 0:02:39.217 ******** 2026-04-04 00:55:33.751611 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.751618 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.751625 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.751633 | orchestrator | ok: [testbed-node-1] 2026-04-04 
00:55:33.751640 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.751649 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.751654 | orchestrator | 2026-04-04 00:55:33.751658 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-04 00:55:33.751663 | orchestrator | Saturday 04 April 2026 00:48:14 +0000 (0:00:01.543) 0:02:40.761 ******** 2026-04-04 00:55:33.751668 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.751672 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.751677 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.751681 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.751686 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.751690 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.751695 | orchestrator | 2026-04-04 00:55:33.751699 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-04 00:55:33.751704 | orchestrator | Saturday 04 April 2026 00:48:15 +0000 (0:00:00.548) 0:02:41.309 ******** 2026-04-04 00:55:33.751709 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.751719 | orchestrator | 2026-04-04 00:55:33.751724 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-04 00:55:33.751728 | orchestrator | Saturday 04 April 2026 00:48:16 +0000 (0:00:01.039) 0:02:42.349 ******** 2026-04-04 00:55:33.751733 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751738 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751742 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751747 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751751 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751756 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:55:33.751760 | orchestrator | 2026-04-04 00:55:33.751765 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-04 00:55:33.751770 | orchestrator | Saturday 04 April 2026 00:48:16 +0000 (0:00:00.568) 0:02:42.917 ******** 2026-04-04 00:55:33.751774 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751779 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751783 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751788 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751792 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751797 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751801 | orchestrator | 2026-04-04 00:55:33.751806 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-04 00:55:33.751810 | orchestrator | Saturday 04 April 2026 00:48:17 +0000 (0:00:00.857) 0:02:43.774 ******** 2026-04-04 00:55:33.751815 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751837 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751864 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751870 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751874 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751879 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751883 | orchestrator | 2026-04-04 00:55:33.751888 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-04 00:55:33.751892 | orchestrator | Saturday 04 April 2026 00:48:18 +0000 (0:00:00.652) 0:02:44.427 ******** 2026-04-04 00:55:33.751897 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751901 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751906 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751910 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 00:55:33.751915 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751919 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751924 | orchestrator | 2026-04-04 00:55:33.751928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-04 00:55:33.751933 | orchestrator | Saturday 04 April 2026 00:48:19 +0000 (0:00:00.809) 0:02:45.236 ******** 2026-04-04 00:55:33.751937 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751942 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751946 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751950 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751955 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.751959 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.751964 | orchestrator | 2026-04-04 00:55:33.751968 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-04 00:55:33.751973 | orchestrator | Saturday 04 April 2026 00:48:19 +0000 (0:00:00.619) 0:02:45.855 ******** 2026-04-04 00:55:33.751977 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.751982 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.751986 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.751991 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.751995 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.752000 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.752004 | orchestrator | 2026-04-04 00:55:33.752009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-04 00:55:33.752013 | orchestrator | Saturday 04 April 2026 00:48:20 +0000 (0:00:00.876) 0:02:46.731 ******** 2026-04-04 00:55:33.752022 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.752026 | orchestrator | skipping: 
[testbed-node-4] 2026-04-04 00:55:33.752031 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.752035 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.752040 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.752044 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.752049 | orchestrator | 2026-04-04 00:55:33.752053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-04 00:55:33.752058 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:00.697) 0:02:47.428 ******** 2026-04-04 00:55:33.752062 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.752067 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.752071 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.752076 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.752080 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.752085 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.752089 | orchestrator | 2026-04-04 00:55:33.752097 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-04 00:55:33.752102 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:00.655) 0:02:48.084 ******** 2026-04-04 00:55:33.752106 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.752111 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.752115 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.752120 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.752124 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.752129 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.752133 | orchestrator | 2026-04-04 00:55:33.752138 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-04 00:55:33.752142 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:01.230) 0:02:49.315 ******** 2026-04-04 
00:55:33.752147 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.752152 | orchestrator | 2026-04-04 00:55:33.752156 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-04 00:55:33.752161 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:01.151) 0:02:50.467 ******** 2026-04-04 00:55:33.752165 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-04 00:55:33.752170 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-04 00:55:33.752175 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-04 00:55:33.752180 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-04 00:55:33.752184 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-04 00:55:33.752189 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-04 00:55:33.752193 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-04 00:55:33.752198 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-04 00:55:33.752202 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-04 00:55:33.752207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-04 00:55:33.752214 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-04 00:55:33.752221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-04 00:55:33.752229 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-04 00:55:33.752236 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-04 00:55:33.752244 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-04 00:55:33.752251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-04-04 00:55:33.752258 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-04 00:55:33.752265 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-04 00:55:33.752293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-04 00:55:33.752310 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-04 00:55:33.752318 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-04 00:55:33.752325 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-04 00:55:33.752333 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-04 00:55:33.752340 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-04 00:55:33.752348 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-04 00:55:33.752355 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-04 00:55:33.752362 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-04 00:55:33.752368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-04 00:55:33.752375 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-04 00:55:33.752383 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-04 00:55:33.752391 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-04 00:55:33.752398 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-04 00:55:33.752405 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-04 00:55:33.752413 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-04 00:55:33.752421 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-04 00:55:33.752428 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-04 00:55:33.752435 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-04 00:55:33.752442 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-04 00:55:33.752450 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-04 00:55:33.752457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-04 00:55:33.752465 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-04 00:55:33.752472 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-04 00:55:33.752480 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-04 00:55:33.752487 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-04 00:55:33.752494 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-04 00:55:33.752502 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-04 00:55:33.752510 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-04 00:55:33.752518 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-04 00:55:33.752526 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-04 00:55:33.752533 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-04 00:55:33.752545 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-04 00:55:33.752553 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-04 00:55:33.752562 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-04 00:55:33.752569 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-04 00:55:33.752577 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-04 00:55:33.752584 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-04 00:55:33.752592 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-04 00:55:33.752599 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-04 00:55:33.752607 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-04 00:55:33.752615 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-04 00:55:33.752622 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-04 00:55:33.752639 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-04 00:55:33.752647 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-04 00:55:33.752654 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-04 00:55:33.752662 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-04 00:55:33.752671 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-04 00:55:33.752676 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-04 00:55:33.752680 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-04 00:55:33.752685 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-04 00:55:33.752689 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-04 00:55:33.752694 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-04 00:55:33.752698 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-04 00:55:33.752703 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-04 00:55:33.752707 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-04 00:55:33.752712 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-04 00:55:33.752716 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-04 00:55:33.752742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-04 00:55:33.752747 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-04 00:55:33.752752 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-04 00:55:33.752756 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-04 00:55:33.752761 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-04 00:55:33.752765 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-04 00:55:33.752770 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-04 00:55:33.752774 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-04 00:55:33.752779 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-04 00:55:33.752783 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-04 00:55:33.752788 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-04 00:55:33.752793 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-04 00:55:33.752797 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-04 00:55:33.752802 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-04 00:55:33.752806 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-04 00:55:33.752811 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-04 00:55:33.752815 | orchestrator | changed: [testbed-node-5] => 
(item=/var/log/ceph) 2026-04-04 00:55:33.752859 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-04 00:55:33.752866 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-04 00:55:33.752873 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-04 00:55:33.752880 | orchestrator | 2026-04-04 00:55:33.752886 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-04 00:55:33.752893 | orchestrator | Saturday 04 April 2026 00:48:30 +0000 (0:00:06.569) 0:02:57.036 ******** 2026-04-04 00:55:33.752899 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.752905 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.752912 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.752920 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.752934 | orchestrator | 2026-04-04 00:55:33.752941 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-04 00:55:33.752948 | orchestrator | Saturday 04 April 2026 00:48:31 +0000 (0:00:00.926) 0:02:57.963 ******** 2026-04-04 00:55:33.752955 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.752967 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.752974 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.752980 | orchestrator | 2026-04-04 00:55:33.752987 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-04 00:55:33.752994 | orchestrator | Saturday 
04 April 2026 00:48:32 +0000 (0:00:00.831) 0:02:58.795 ******** 2026-04-04 00:55:33.753001 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.753007 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.753014 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.753022 | orchestrator | 2026-04-04 00:55:33.753029 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-04 00:55:33.753036 | orchestrator | Saturday 04 April 2026 00:48:34 +0000 (0:00:01.589) 0:03:00.384 ******** 2026-04-04 00:55:33.753043 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.753050 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.753057 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.753064 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753071 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753078 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753084 | orchestrator | 2026-04-04 00:55:33.753091 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-04 00:55:33.753097 | orchestrator | Saturday 04 April 2026 00:48:34 +0000 (0:00:00.448) 0:03:00.833 ******** 2026-04-04 00:55:33.753104 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.753111 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.753117 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.753124 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753130 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753137 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753144 | 
orchestrator | 2026-04-04 00:55:33.753151 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-04 00:55:33.753157 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:00.500) 0:03:01.334 ******** 2026-04-04 00:55:33.753164 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753171 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753178 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753185 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753193 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753199 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753206 | orchestrator | 2026-04-04 00:55:33.753250 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-04 00:55:33.753259 | orchestrator | Saturday 04 April 2026 00:48:35 +0000 (0:00:00.795) 0:03:02.130 ******** 2026-04-04 00:55:33.753267 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753274 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753282 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753289 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753296 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753313 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753320 | orchestrator | 2026-04-04 00:55:33.753327 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-04 00:55:33.753335 | orchestrator | Saturday 04 April 2026 00:48:36 +0000 (0:00:00.545) 0:03:02.675 ******** 2026-04-04 00:55:33.753342 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753349 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753356 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753363 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753371 | 
orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753378 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753385 | orchestrator | 2026-04-04 00:55:33.753393 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-04 00:55:33.753401 | orchestrator | Saturday 04 April 2026 00:48:37 +0000 (0:00:00.714) 0:03:03.390 ******** 2026-04-04 00:55:33.753409 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753416 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753424 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753431 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753439 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753446 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753454 | orchestrator | 2026-04-04 00:55:33.753462 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-04 00:55:33.753469 | orchestrator | Saturday 04 April 2026 00:48:37 +0000 (0:00:00.786) 0:03:04.177 ******** 2026-04-04 00:55:33.753477 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753484 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753488 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753493 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753497 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753502 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753506 | orchestrator | 2026-04-04 00:55:33.753511 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-04 00:55:33.753516 | orchestrator | Saturday 04 April 2026 00:48:38 +0000 (0:00:00.655) 0:03:04.832 ******** 2026-04-04 00:55:33.753520 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753524 | 
orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753529 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753533 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753538 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753542 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753547 | orchestrator | 2026-04-04 00:55:33.753551 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-04 00:55:33.753561 | orchestrator | Saturday 04 April 2026 00:48:39 +0000 (0:00:00.424) 0:03:05.257 ******** 2026-04-04 00:55:33.753565 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753570 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753574 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753579 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.753583 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.753588 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.753592 | orchestrator | 2026-04-04 00:55:33.753597 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-04 00:55:33.753602 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:01.475) 0:03:06.733 ******** 2026-04-04 00:55:33.753606 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.753611 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.753615 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.753620 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753624 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753632 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753640 | orchestrator | 2026-04-04 00:55:33.753654 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-04 00:55:33.753662 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:00.724) 
0:03:07.458 ******** 2026-04-04 00:55:33.753669 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.753676 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.753683 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753689 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.753697 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753704 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753711 | orchestrator | 2026-04-04 00:55:33.753718 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-04 00:55:33.753725 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:00.733) 0:03:08.191 ******** 2026-04-04 00:55:33.753732 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753740 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753747 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753755 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.753763 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753771 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753778 | orchestrator | 2026-04-04 00:55:33.753786 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-04 00:55:33.753793 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.716) 0:03:08.908 ******** 2026-04-04 00:55:33.753801 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.753809 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.753817 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.753842 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 00:55:33.753881 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.753890 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.753898 | orchestrator | 2026-04-04 00:55:33.753905 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-04 00:55:33.753914 | orchestrator | Saturday 04 April 2026 00:48:43 +0000 (0:00:00.837) 0:03:09.746 ******** 2026-04-04 00:55:33.753923 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-04 00:55:33.753932 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-04 00:55:33.753940 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.753948 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-04 00:55:33.753955 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-04 00:55:33.753962 | orchestrator | skipping: [testbed-node-5] => 
(item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-04 00:55:33.753977 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-04 00:55:33.753984 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.753992 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.753999 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754006 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754041 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754050 | orchestrator | 2026-04-04 00:55:33.754058 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-04 00:55:33.754065 | orchestrator | Saturday 04 April 2026 00:48:44 +0000 (0:00:00.668) 0:03:10.414 ******** 2026-04-04 00:55:33.754073 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754080 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.754087 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.754094 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754101 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754107 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754114 | orchestrator | 2026-04-04 00:55:33.754122 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-04 00:55:33.754129 | orchestrator | Saturday 04 April 2026 00:48:45 +0000 (0:00:00.911) 0:03:11.325 ******** 2026-04-04 
00:55:33.754136 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754143 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.754151 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.754158 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754166 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754173 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754180 | orchestrator | 2026-04-04 00:55:33.754188 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-04 00:55:33.754195 | orchestrator | Saturday 04 April 2026 00:48:45 +0000 (0:00:00.475) 0:03:11.801 ******** 2026-04-04 00:55:33.754202 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754208 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.754216 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.754223 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754230 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754237 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754243 | orchestrator | 2026-04-04 00:55:33.754291 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-04 00:55:33.754299 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.885) 0:03:12.686 ******** 2026-04-04 00:55:33.754306 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754313 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.754322 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.754329 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754412 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754441 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754448 | orchestrator | 2026-04-04 00:55:33.754455 | orchestrator | TASK [ceph-facts 
: Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-04 00:55:33.754511 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.485) 0:03:13.172 ******** 2026-04-04 00:55:33.754521 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754528 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.754536 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.754544 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754551 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754595 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754602 | orchestrator | 2026-04-04 00:55:33.754609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-04 00:55:33.754616 | orchestrator | Saturday 04 April 2026 00:48:47 +0000 (0:00:00.591) 0:03:13.764 ******** 2026-04-04 00:55:33.754628 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.754637 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754645 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.754652 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754659 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.754666 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754673 | orchestrator | 2026-04-04 00:55:33.754681 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-04 00:55:33.754688 | orchestrator | Saturday 04 April 2026 00:48:48 +0000 (0:00:00.472) 0:03:14.236 ******** 2026-04-04 00:55:33.754696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:55:33.754704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:55:33.754712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:55:33.754719 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754727 | orchestrator | 
2026-04-04 00:55:33.754734 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-04 00:55:33.754742 | orchestrator | Saturday 04 April 2026 00:48:48 +0000 (0:00:00.375) 0:03:14.611 ******** 2026-04-04 00:55:33.754750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:55:33.754757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:55:33.754764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:55:33.754772 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754778 | orchestrator | 2026-04-04 00:55:33.754783 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-04 00:55:33.754788 | orchestrator | Saturday 04 April 2026 00:48:48 +0000 (0:00:00.425) 0:03:15.036 ******** 2026-04-04 00:55:33.754793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:55:33.754797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:55:33.754802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:55:33.754807 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.754813 | orchestrator | 2026-04-04 00:55:33.754838 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-04 00:55:33.754851 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.493) 0:03:15.530 ******** 2026-04-04 00:55:33.754858 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.754865 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.754872 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.754887 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754893 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754900 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754907 | orchestrator | 
2026-04-04 00:55:33.754914 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-04 00:55:33.754922 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.652) 0:03:16.182 ******** 2026-04-04 00:55:33.754928 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-04 00:55:33.754936 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-04 00:55:33.754943 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-04 00:55:33.754951 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-04 00:55:33.754959 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.754966 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-04 00:55:33.754972 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.754979 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-04 00:55:33.754986 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.754993 | orchestrator | 2026-04-04 00:55:33.755001 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-04 00:55:33.755017 | orchestrator | Saturday 04 April 2026 00:48:51 +0000 (0:00:01.519) 0:03:17.701 ******** 2026-04-04 00:55:33.755024 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.755032 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.755039 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.755049 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.755054 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.755059 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.755063 | orchestrator | 2026-04-04 00:55:33.755068 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:55:33.755072 | orchestrator | Saturday 04 April 2026 00:48:53 +0000 (0:00:02.045) 0:03:19.746 ******** 2026-04-04 00:55:33.755077 | orchestrator | changed: [testbed-node-3] 
2026-04-04 00:55:33.755082 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:55:33.755086 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:55:33.755091 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.755095 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.755100 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.755104 | orchestrator |
2026-04-04 00:55:33.755109 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-04 00:55:33.755113 | orchestrator | Saturday 04 April 2026 00:48:55 +0000 (0:00:01.527) 0:03:21.274 ********
2026-04-04 00:55:33.755118 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755122 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.755127 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.755132 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.755137 | orchestrator |
2026-04-04 00:55:33.755142 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-04 00:55:33.755191 | orchestrator | Saturday 04 April 2026 00:48:56 +0000 (0:00:00.327) 0:03:22.356 ********
2026-04-04 00:55:33.755201 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.755209 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.755216 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.755224 | orchestrator |
2026-04-04 00:55:33.755232 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-04 00:55:33.755240 | orchestrator | Saturday 04 April 2026 00:48:56 +0000 (0:00:00.327) 0:03:22.684 ********
2026-04-04 00:55:33.755248 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.755253 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.755258 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.755265 | orchestrator |
2026-04-04 00:55:33.755273 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-04 00:55:33.755280 | orchestrator | Saturday 04 April 2026 00:48:58 +0000 (0:00:01.600) 0:03:24.285 ********
2026-04-04 00:55:33.755288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:55:33.755296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:55:33.755304 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:55:33.755312 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.755319 | orchestrator |
2026-04-04 00:55:33.755327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-04 00:55:33.755334 | orchestrator | Saturday 04 April 2026 00:48:58 +0000 (0:00:00.669) 0:03:24.955 ********
2026-04-04 00:55:33.755342 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.755349 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.755356 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.755364 | orchestrator |
2026-04-04 00:55:33.755372 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-04 00:55:33.755380 | orchestrator | Saturday 04 April 2026 00:48:58 +0000 (0:00:00.256) 0:03:25.211 ********
2026-04-04 00:55:33.755387 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.755395 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.755409 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.755417 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.755424 | orchestrator |
2026-04-04 00:55:33.755432 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-04 00:55:33.755439 | orchestrator | Saturday 04 April 2026 00:48:59 +0000 (0:00:00.897) 0:03:26.109 ********
2026-04-04 00:55:33.755447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.755454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.755461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.755469 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755477 | orchestrator |
2026-04-04 00:55:33.755484 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-04 00:55:33.755492 | orchestrator | Saturday 04 April 2026 00:49:00 +0000 (0:00:00.360) 0:03:26.469 ********
2026-04-04 00:55:33.755500 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755508 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.755516 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.755523 | orchestrator |
2026-04-04 00:55:33.755535 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-04 00:55:33.755543 | orchestrator | Saturday 04 April 2026 00:49:00 +0000 (0:00:00.266) 0:03:26.735 ********
2026-04-04 00:55:33.755551 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755558 | orchestrator |
2026-04-04 00:55:33.755566 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-04 00:55:33.755574 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:00.497) 0:03:27.233 ********
2026-04-04 00:55:33.755581 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755589 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.755597 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.755605 | orchestrator |
2026-04-04 00:55:33.755612 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-04 00:55:33.755620 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:00.297) 0:03:27.530 ********
2026-04-04 00:55:33.755627 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755635 | orchestrator |
2026-04-04 00:55:33.755642 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-04 00:55:33.755650 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:00.187) 0:03:27.717 ********
2026-04-04 00:55:33.755658 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755665 | orchestrator |
2026-04-04 00:55:33.755673 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-04 00:55:33.755681 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:00.199) 0:03:27.917 ********
2026-04-04 00:55:33.755689 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755696 | orchestrator |
2026-04-04 00:55:33.755704 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-04 00:55:33.755711 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:00.096) 0:03:28.014 ********
2026-04-04 00:55:33.755719 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755727 | orchestrator |
2026-04-04 00:55:33.755734 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-04 00:55:33.755742 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:00.185) 0:03:28.199 ********
2026-04-04 00:55:33.755749 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755757 | orchestrator |
2026-04-04 00:55:33.755765 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-04 00:55:33.755773 | orchestrator | Saturday 04 April 2026 00:49:02 +0000 (0:00:00.171) 0:03:28.371 ********
2026-04-04 00:55:33.755780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.755788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.755795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.755808 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755816 | orchestrator |
2026-04-04 00:55:33.755870 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-04 00:55:33.755906 | orchestrator | Saturday 04 April 2026 00:49:02 +0000 (0:00:00.346) 0:03:28.718 ********
2026-04-04 00:55:33.755915 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755922 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.755930 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.755937 | orchestrator |
2026-04-04 00:55:33.755945 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-04 00:55:33.755952 | orchestrator | Saturday 04 April 2026 00:49:02 +0000 (0:00:00.390) 0:03:29.109 ********
2026-04-04 00:55:33.755960 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755967 | orchestrator |
2026-04-04 00:55:33.755974 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-04 00:55:33.755982 | orchestrator | Saturday 04 April 2026 00:49:03 +0000 (0:00:00.168) 0:03:29.277 ********
2026-04-04 00:55:33.755990 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.755998 | orchestrator |
2026-04-04 00:55:33.756005 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-04 00:55:33.756012 | orchestrator | Saturday 04 April 2026 00:49:03 +0000 (0:00:00.181) 0:03:29.459 ********
2026-04-04 00:55:33.756019 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.756026 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.756034 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.756041 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.756047 | orchestrator |
2026-04-04 00:55:33.756054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-04 00:55:33.756061 | orchestrator | Saturday 04 April 2026 00:49:04 +0000 (0:00:00.789) 0:03:30.249 ********
2026-04-04 00:55:33.756069 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.756076 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.756084 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.756092 | orchestrator |
2026-04-04 00:55:33.756099 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-04 00:55:33.756106 | orchestrator | Saturday 04 April 2026 00:49:04 +0000 (0:00:00.407) 0:03:30.656 ********
2026-04-04 00:55:33.756114 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:55:33.756122 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:55:33.756130 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:55:33.756137 | orchestrator |
2026-04-04 00:55:33.756145 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-04 00:55:33.756152 | orchestrator | Saturday 04 April 2026 00:49:05 +0000 (0:00:00.983) 0:03:31.640 ********
2026-04-04 00:55:33.756160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.756167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.756175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.756181 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.756189 | orchestrator |
2026-04-04 00:55:33.756196 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-04 00:55:33.756203 | orchestrator | Saturday 04 April 2026 00:49:05 +0000 (0:00:00.490) 0:03:32.131 ********
2026-04-04 00:55:33.756211 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.756219 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.756233 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.756240 | orchestrator |
2026-04-04 00:55:33.756248 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-04 00:55:33.756255 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:00.299) 0:03:32.430 ********
2026-04-04 00:55:33.756263 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.756271 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.756284 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.756292 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.756300 | orchestrator |
2026-04-04 00:55:33.756307 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-04 00:55:33.756314 | orchestrator | Saturday 04 April 2026 00:49:07 +0000 (0:00:00.809) 0:03:33.240 ********
2026-04-04 00:55:33.756322 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.756330 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.756338 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.756345 | orchestrator |
2026-04-04 00:55:33.756353 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-04 00:55:33.756360 | orchestrator | Saturday 04 April 2026 00:49:07 +0000 (0:00:00.260) 0:03:33.500 ********
2026-04-04 00:55:33.756367 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:55:33.756373 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:55:33.756380 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:55:33.756387 | orchestrator |
2026-04-04 00:55:33.756393 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-04 00:55:33.756400 | orchestrator | Saturday 04 April 2026 00:49:08 +0000 (0:00:01.218) 0:03:34.718 ********
2026-04-04 00:55:33.756407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.756414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.756421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.756428 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.756435 | orchestrator |
2026-04-04 00:55:33.756442 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-04 00:55:33.756448 | orchestrator | Saturday 04 April 2026 00:49:09 +0000 (0:00:00.613) 0:03:35.332 ********
2026-04-04 00:55:33.756455 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.756462 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.756469 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.756475 | orchestrator |
2026-04-04 00:55:33.756482 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-04 00:55:33.756489 | orchestrator | Saturday 04 April 2026 00:49:09 +0000 (0:00:00.302) 0:03:35.635 ********
2026-04-04 00:55:33.756496 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.756503 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.756510 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.756517 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.756524 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.756560 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.756567 | orchestrator |
2026-04-04 00:55:33.756574 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-04 00:55:33.756581 | orchestrator | Saturday 04 April 2026 00:49:09 +0000 (0:00:00.555) 0:03:36.191 ********
2026-04-04 00:55:33.756588 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.756594 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.756601 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.756607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.756614 | orchestrator |
2026-04-04 00:55:33.756621 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-04 00:55:33.756629 | orchestrator | Saturday 04 April 2026 00:49:11 +0000 (0:00:01.154) 0:03:37.346 ********
2026-04-04 00:55:33.756633 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.756637 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.756641 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.756645 | orchestrator |
2026-04-04 00:55:33.756650 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-04 00:55:33.756654 | orchestrator | Saturday 04 April 2026 00:49:11 +0000 (0:00:00.350) 0:03:37.696 ********
2026-04-04 00:55:33.756663 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.756667 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.756671 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.756675 | orchestrator |
2026-04-04 00:55:33.756680 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-04 00:55:33.756684 | orchestrator | Saturday 04 April 2026 00:49:12 +0000 (0:00:01.284) 0:03:38.981 ********
2026-04-04 00:55:33.756688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:55:33.756692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:55:33.756696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:55:33.756700 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.756704 | orchestrator |
2026-04-04 00:55:33.756709 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-04 00:55:33.756713 | orchestrator | Saturday 04 April 2026 00:49:13 +0000 (0:00:00.586) 0:03:39.568 ********
2026-04-04 00:55:33.756717 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.756721 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.756725 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.756729 | orchestrator |
2026-04-04 00:55:33.756734 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-04 00:55:33.756738 | orchestrator |
2026-04-04 00:55:33.756742 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-04 00:55:33.756746 | orchestrator | Saturday 04 April 2026 00:49:13 +0000 (0:00:00.606) 0:03:40.174 ********
2026-04-04 00:55:33.756753 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.756760 | orchestrator |
2026-04-04 00:55:33.756767 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-04 00:55:33.756778 | orchestrator | Saturday 04 April 2026 00:49:14 +0000 (0:00:00.762) 0:03:40.936 ********
2026-04-04 00:55:33.756785 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.756792 | orchestrator |
2026-04-04 00:55:33.756799 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-04 00:55:33.756806 | orchestrator | Saturday 04 April 2026 00:49:15 +0000 (0:00:00.680) 0:03:41.616 ********
2026-04-04 00:55:33.756813 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.756838 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.756846 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.756853 | orchestrator |
2026-04-04 00:55:33.756860 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-04 00:55:33.756867 | orchestrator | Saturday 04 April 2026 00:49:16 +0000 (0:00:00.747) 0:03:42.364 ********
2026-04-04 00:55:33.756874 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.756881 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.756888 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.756895 | orchestrator |
2026-04-04 00:55:33.756902 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-04 00:55:33.756909 | orchestrator | Saturday 04 April 2026 00:49:16 +0000 (0:00:00.324) 0:03:42.688 ********
2026-04-04 00:55:33.756916 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.756923 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.756929 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.756934 | orchestrator |
2026-04-04 00:55:33.756938 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-04 00:55:33.756942 | orchestrator | Saturday 04 April 2026 00:49:17 +0000 (0:00:00.536) 0:03:43.225 ********
2026-04-04 00:55:33.756951 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.756958 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.756965 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.756972 | orchestrator |
2026-04-04 00:55:33.756978 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-04 00:55:33.756991 | orchestrator | Saturday 04 April 2026 00:49:17 +0000 (0:00:00.332) 0:03:43.558 ********
2026-04-04 00:55:33.756998 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757005 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757012 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757018 | orchestrator |
2026-04-04 00:55:33.757025 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-04 00:55:33.757032 | orchestrator | Saturday 04 April 2026 00:49:18 +0000 (0:00:00.899) 0:03:44.458 ********
2026-04-04 00:55:33.757038 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757044 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757051 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757058 | orchestrator |
2026-04-04 00:55:33.757065 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-04 00:55:33.757072 | orchestrator | Saturday 04 April 2026 00:49:18 +0000 (0:00:00.310) 0:03:44.768 ********
2026-04-04 00:55:33.757106 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757113 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757120 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757127 | orchestrator |
2026-04-04 00:55:33.757134 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-04 00:55:33.757141 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:00.576) 0:03:45.345 ********
2026-04-04 00:55:33.757148 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757155 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757161 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757168 | orchestrator |
2026-04-04 00:55:33.757174 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-04 00:55:33.757179 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:00.728) 0:03:46.074 ********
2026-04-04 00:55:33.757186 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757193 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757199 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757206 | orchestrator |
2026-04-04 00:55:33.757213 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-04 00:55:33.757220 | orchestrator | Saturday 04 April 2026 00:49:20 +0000 (0:00:00.689) 0:03:46.763 ********
2026-04-04 00:55:33.757227 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757234 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757241 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757248 | orchestrator |
2026-04-04 00:55:33.757254 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-04 00:55:33.757261 | orchestrator | Saturday 04 April 2026 00:49:20 +0000 (0:00:00.267) 0:03:47.030 ********
2026-04-04 00:55:33.757268 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757275 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757282 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757289 | orchestrator |
2026-04-04 00:55:33.757296 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-04 00:55:33.757303 | orchestrator | Saturday 04 April 2026 00:49:21 +0000 (0:00:00.430) 0:03:47.461 ********
2026-04-04 00:55:33.757310 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757317 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757323 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757330 | orchestrator |
2026-04-04 00:55:33.757337 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-04 00:55:33.757344 | orchestrator | Saturday 04 April 2026 00:49:21 +0000 (0:00:00.275) 0:03:47.736 ********
2026-04-04 00:55:33.757350 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757358 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757364 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757371 | orchestrator |
2026-04-04 00:55:33.757378 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-04 00:55:33.757385 | orchestrator | Saturday 04 April 2026 00:49:21 +0000 (0:00:00.295) 0:03:48.031 ********
2026-04-04 00:55:33.757399 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757406 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757412 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757419 | orchestrator |
2026-04-04 00:55:33.757426 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-04 00:55:33.757437 | orchestrator | Saturday 04 April 2026 00:49:22 +0000 (0:00:00.272) 0:03:48.303 ********
2026-04-04 00:55:33.757444 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757451 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757457 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757464 | orchestrator |
2026-04-04 00:55:33.757471 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-04 00:55:33.757478 | orchestrator | Saturday 04 April 2026 00:49:22 +0000 (0:00:00.406) 0:03:48.710 ********
2026-04-04 00:55:33.757485 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757492 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.757499 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.757506 | orchestrator |
2026-04-04 00:55:33.757512 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-04 00:55:33.757519 | orchestrator | Saturday 04 April 2026 00:49:22 +0000 (0:00:00.253) 0:03:48.964 ********
2026-04-04 00:55:33.757526 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757533 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757539 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757547 | orchestrator |
2026-04-04 00:55:33.757554 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-04 00:55:33.757561 | orchestrator | Saturday 04 April 2026 00:49:23 +0000 (0:00:00.282) 0:03:49.246 ********
2026-04-04 00:55:33.757568 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757574 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757581 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757588 | orchestrator |
2026-04-04 00:55:33.757595 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-04 00:55:33.757602 | orchestrator | Saturday 04 April 2026 00:49:23 +0000 (0:00:00.281) 0:03:49.528 ********
2026-04-04 00:55:33.757609 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757616 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757623 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757630 | orchestrator |
2026-04-04 00:55:33.757637 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-04 00:55:33.757644 | orchestrator | Saturday 04 April 2026 00:49:23 +0000 (0:00:00.598) 0:03:50.127 ********
2026-04-04 00:55:33.757651 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757657 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757664 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757671 | orchestrator |
2026-04-04 00:55:33.757678 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-04 00:55:33.757685 | orchestrator | Saturday 04 April 2026 00:49:24 +0000 (0:00:00.318) 0:03:50.446 ********
2026-04-04 00:55:33.757692 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.757699 | orchestrator |
2026-04-04 00:55:33.757705 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-04 00:55:33.757712 | orchestrator | Saturday 04 April 2026 00:49:24 +0000 (0:00:00.422) 0:03:50.978 ********
2026-04-04 00:55:33.757718 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.757723 | orchestrator |
2026-04-04 00:55:33.757748 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-04 00:55:33.757753 | orchestrator | Saturday 04 April 2026 00:49:25 +0000 (0:00:00.422) 0:03:51.401 ********
2026-04-04 00:55:33.757757 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-04 00:55:33.757762 | orchestrator |
2026-04-04 00:55:33.757766 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-04 00:55:33.757770 | orchestrator | Saturday 04 April 2026 00:49:26 +0000 (0:00:01.186) 0:03:52.587 ********
2026-04-04 00:55:33.757782 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757787 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757791 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757795 | orchestrator |
2026-04-04 00:55:33.757799 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-04 00:55:33.757803 | orchestrator | Saturday 04 April 2026 00:49:26 +0000 (0:00:00.397) 0:03:52.985 ********
2026-04-04 00:55:33.757807 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757811 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757815 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757859 | orchestrator |
2026-04-04 00:55:33.757864 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-04 00:55:33.757868 | orchestrator | Saturday 04 April 2026 00:49:27 +0000 (0:00:00.338) 0:03:53.323 ********
2026-04-04 00:55:33.757872 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.757876 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.757881 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.757885 | orchestrator |
2026-04-04 00:55:33.757889 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-04 00:55:33.757893 | orchestrator | Saturday 04 April 2026 00:49:28 +0000 (0:00:01.014) 0:03:54.338 ********
2026-04-04 00:55:33.757897 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.757902 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.757906 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.757910 | orchestrator |
2026-04-04 00:55:33.757914 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-04 00:55:33.757918 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:00.979) 0:03:55.318 ********
2026-04-04 00:55:33.757923 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.757927 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.757931 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.757935 | orchestrator |
2026-04-04 00:55:33.757939 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-04 00:55:33.757944 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:00.771) 0:03:56.089 ********
2026-04-04 00:55:33.757948 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.757952 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757956 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.757960 | orchestrator |
2026-04-04 00:55:33.757964 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-04 00:55:33.757968 | orchestrator | Saturday 04 April 2026 00:49:30 +0000 (0:00:00.763) 0:03:56.853 ********
2026-04-04 00:55:33.757972 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.757977 | orchestrator |
2026-04-04 00:55:33.757981 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-04 00:55:33.757989 | orchestrator | Saturday 04 April 2026 00:49:31 +0000 (0:00:01.220) 0:03:58.074 ********
2026-04-04 00:55:33.757993 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.757997 | orchestrator |
2026-04-04 00:55:33.758001 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-04 00:55:33.758005 | orchestrator | Saturday 04 April 2026 00:49:32 +0000 (0:00:00.740) 0:03:58.814 ********
2026-04-04 00:55:33.758010 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-04 00:55:33.758055 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-04 00:55:33.758059 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-04 00:55:33.758063 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-04 00:55:33.758067 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-04 00:55:33.758072 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-04 00:55:33.758076 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-04 00:55:33.758080 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-04 00:55:33.758088 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-04 00:55:33.758092 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-04 00:55:33.758096 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-04 00:55:33.758100 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-04 00:55:33.758105 | orchestrator |
2026-04-04 00:55:33.758109 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-04 00:55:33.758113 | orchestrator | Saturday 04 April 2026 00:49:36 +0000 (0:00:03.811) 0:04:02.626 ********
2026-04-04 00:55:33.758117 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.758121 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.758125 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.758129 | orchestrator |
2026-04-04 00:55:33.758133 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-04 00:55:33.758137 | orchestrator | Saturday 04 April 2026 00:49:37 +0000 (0:00:01.338) 0:04:03.964 ********
2026-04-04 00:55:33.758142 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.758146 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.758150 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.758154 | orchestrator |
2026-04-04 00:55:33.758158 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-04 00:55:33.758162 | orchestrator | Saturday 04 April 2026 00:49:37 +0000 (0:00:00.239) 0:04:04.204 ********
2026-04-04 00:55:33.758166 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.758170 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.758174 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.758178 | orchestrator |
2026-04-04 00:55:33.758182 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-04 00:55:33.758187 | orchestrator | Saturday 04 April 2026 00:49:38 +0000 (0:00:00.260) 0:04:04.465 ********
2026-04-04 00:55:33.758191 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.758213 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.758218 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.758222 | orchestrator |
2026-04-04 00:55:33.758226 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-04 00:55:33.758230 | orchestrator | Saturday 04 April 2026 00:49:39 +0000 (0:00:01.723) 0:04:06.188 ********
2026-04-04 00:55:33.758234 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.758241 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.758248 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.758255 | orchestrator | 2026-04-04 00:55:33.758262 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-04 00:55:33.758269 | orchestrator | Saturday 04 April 2026 00:49:41 +0000 (0:00:01.563) 0:04:07.752 ******** 2026-04-04 00:55:33.758277 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758283 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.758290 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758297 | orchestrator | 2026-04-04 00:55:33.758304 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-04 00:55:33.758311 | orchestrator | Saturday 04 April 2026 00:49:41 +0000 (0:00:00.391) 0:04:08.143 ******** 2026-04-04 00:55:33.758318 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.758325 | orchestrator | 2026-04-04 00:55:33.758332 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-04 00:55:33.758340 | orchestrator | Saturday 04 April 2026 00:49:42 +0000 (0:00:00.454) 0:04:08.597 ******** 2026-04-04 00:55:33.758347 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758354 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.758361 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758368 | orchestrator | 2026-04-04 00:55:33.758375 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-04 00:55:33.758383 | orchestrator | Saturday 04 April 2026 00:49:42 +0000 (0:00:00.399) 0:04:08.996 ******** 2026-04-04 00:55:33.758394 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758398 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 00:55:33.758402 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758406 | orchestrator | 2026-04-04 00:55:33.758410 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-04 00:55:33.758415 | orchestrator | Saturday 04 April 2026 00:49:43 +0000 (0:00:00.259) 0:04:09.255 ******** 2026-04-04 00:55:33.758419 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.758423 | orchestrator | 2026-04-04 00:55:33.758427 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-04 00:55:33.758432 | orchestrator | Saturday 04 April 2026 00:49:43 +0000 (0:00:00.412) 0:04:09.667 ******** 2026-04-04 00:55:33.758436 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.758440 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.758444 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.758448 | orchestrator | 2026-04-04 00:55:33.758452 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-04 00:55:33.758460 | orchestrator | Saturday 04 April 2026 00:49:45 +0000 (0:00:01.789) 0:04:11.456 ******** 2026-04-04 00:55:33.758465 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.758469 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.758473 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.758477 | orchestrator | 2026-04-04 00:55:33.758481 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-04 00:55:33.758486 | orchestrator | Saturday 04 April 2026 00:49:47 +0000 (0:00:02.217) 0:04:13.674 ******** 2026-04-04 00:55:33.758490 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.758494 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.758498 | orchestrator | changed: 
[testbed-node-1] 2026-04-04 00:55:33.758502 | orchestrator | 2026-04-04 00:55:33.758506 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-04 00:55:33.758511 | orchestrator | Saturday 04 April 2026 00:49:49 +0000 (0:00:02.141) 0:04:15.816 ******** 2026-04-04 00:55:33.758515 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.758519 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.758523 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.758527 | orchestrator | 2026-04-04 00:55:33.758531 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-04 00:55:33.758536 | orchestrator | Saturday 04 April 2026 00:49:51 +0000 (0:00:02.108) 0:04:17.924 ******** 2026-04-04 00:55:33.758540 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.758544 | orchestrator | 2026-04-04 00:55:33.758548 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-04 00:55:33.758553 | orchestrator | Saturday 04 April 2026 00:49:52 +0000 (0:00:00.586) 0:04:18.511 ******** 2026-04-04 00:55:33.758557 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-04 00:55:33.758561 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.758565 | orchestrator | 2026-04-04 00:55:33.758569 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-04 00:55:33.758573 | orchestrator | Saturday 04 April 2026 00:50:13 +0000 (0:00:21.454) 0:04:39.966 ******** 2026-04-04 00:55:33.758578 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.758582 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.758586 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.758590 | orchestrator | 2026-04-04 00:55:33.758594 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-04 00:55:33.758598 | orchestrator | Saturday 04 April 2026 00:50:19 +0000 (0:00:05.873) 0:04:45.839 ******** 2026-04-04 00:55:33.758602 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758607 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.758611 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758621 | orchestrator | 2026-04-04 00:55:33.758625 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-04 00:55:33.758649 | orchestrator | Saturday 04 April 2026 00:50:19 +0000 (0:00:00.299) 0:04:46.138 ******** 2026-04-04 00:55:33.758656 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54ebf5f35dfdee9fcc6212e7b1bc940359d399ce'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-04 00:55:33.758661 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54ebf5f35dfdee9fcc6212e7b1bc940359d399ce'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-04 00:55:33.758667 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54ebf5f35dfdee9fcc6212e7b1bc940359d399ce'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-04 00:55:33.758673 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54ebf5f35dfdee9fcc6212e7b1bc940359d399ce'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-04 00:55:33.758677 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54ebf5f35dfdee9fcc6212e7b1bc940359d399ce'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-04 00:55:33.758684 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54ebf5f35dfdee9fcc6212e7b1bc940359d399ce'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__54ebf5f35dfdee9fcc6212e7b1bc940359d399ce'}])  2026-04-04 00:55:33.758690 | orchestrator | 2026-04-04 00:55:33.758694 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:55:33.758698 | orchestrator | Saturday 04 April 2026 00:50:29 +0000 (0:00:09.643) 0:04:55.781 ******** 2026-04-04 00:55:33.758702 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758706 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.758710 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758714 | orchestrator | 2026-04-04 00:55:33.758719 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-04 00:55:33.758723 | orchestrator | Saturday 04 April 2026 00:50:29 +0000 (0:00:00.356) 0:04:56.138 ******** 2026-04-04 00:55:33.758727 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.758731 | orchestrator | 2026-04-04 00:55:33.758735 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-04 00:55:33.758739 | orchestrator | Saturday 04 April 2026 00:50:30 +0000 (0:00:00.529) 0:04:56.667 ******** 2026-04-04 00:55:33.758743 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.758747 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.758751 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.758759 | orchestrator | 2026-04-04 00:55:33.758763 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-04 00:55:33.758767 | orchestrator | Saturday 04 April 2026 00:50:31 +0000 (0:00:00.625) 0:04:57.292 ******** 2026-04-04 00:55:33.758771 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758775 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.758779 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758783 | orchestrator | 2026-04-04 00:55:33.758787 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-04 
00:55:33.758791 | orchestrator | Saturday 04 April 2026 00:50:31 +0000 (0:00:00.333) 0:04:57.626 ******** 2026-04-04 00:55:33.758796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-04 00:55:33.758800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-04 00:55:33.758804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-04 00:55:33.758808 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758812 | orchestrator | 2026-04-04 00:55:33.758816 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-04 00:55:33.758837 | orchestrator | Saturday 04 April 2026 00:50:32 +0000 (0:00:00.632) 0:04:58.258 ******** 2026-04-04 00:55:33.758841 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.758845 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.758866 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.758871 | orchestrator | 2026-04-04 00:55:33.758875 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-04 00:55:33.758879 | orchestrator | 2026-04-04 00:55:33.758883 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-04 00:55:33.758887 | orchestrator | Saturday 04 April 2026 00:50:32 +0000 (0:00:00.781) 0:04:59.039 ******** 2026-04-04 00:55:33.758892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.758896 | orchestrator | 2026-04-04 00:55:33.758900 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-04 00:55:33.758904 | orchestrator | Saturday 04 April 2026 00:50:33 +0000 (0:00:00.503) 0:04:59.542 ******** 2026-04-04 00:55:33.758908 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-04 00:55:33.758913 | orchestrator | 2026-04-04 00:55:33.758917 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-04 00:55:33.758921 | orchestrator | Saturday 04 April 2026 00:50:33 +0000 (0:00:00.495) 0:05:00.038 ******** 2026-04-04 00:55:33.758925 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.758929 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.758933 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.758937 | orchestrator | 2026-04-04 00:55:33.758941 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-04 00:55:33.758945 | orchestrator | Saturday 04 April 2026 00:50:34 +0000 (0:00:01.031) 0:05:01.069 ******** 2026-04-04 00:55:33.758949 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758953 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.758958 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758962 | orchestrator | 2026-04-04 00:55:33.758966 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-04 00:55:33.758970 | orchestrator | Saturday 04 April 2026 00:50:35 +0000 (0:00:00.325) 0:05:01.395 ******** 2026-04-04 00:55:33.758974 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.758978 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.758982 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.758986 | orchestrator | 2026-04-04 00:55:33.758990 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-04 00:55:33.758994 | orchestrator | Saturday 04 April 2026 00:50:35 +0000 (0:00:00.309) 0:05:01.704 ******** 2026-04-04 00:55:33.758998 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759006 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759010 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:55:33.759014 | orchestrator | 2026-04-04 00:55:33.759018 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-04 00:55:33.759022 | orchestrator | Saturday 04 April 2026 00:50:35 +0000 (0:00:00.278) 0:05:01.983 ******** 2026-04-04 00:55:33.759026 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759030 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759034 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.759039 | orchestrator | 2026-04-04 00:55:33.759045 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-04 00:55:33.759050 | orchestrator | Saturday 04 April 2026 00:50:36 +0000 (0:00:01.196) 0:05:03.180 ******** 2026-04-04 00:55:33.759054 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759058 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759062 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759066 | orchestrator | 2026-04-04 00:55:33.759070 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:55:33.759074 | orchestrator | Saturday 04 April 2026 00:50:37 +0000 (0:00:00.311) 0:05:03.492 ******** 2026-04-04 00:55:33.759078 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759082 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759086 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759090 | orchestrator | 2026-04-04 00:55:33.759095 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:55:33.759099 | orchestrator | Saturday 04 April 2026 00:50:37 +0000 (0:00:00.280) 0:05:03.772 ******** 2026-04-04 00:55:33.759103 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759107 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759111 | orchestrator | ok: [testbed-node-2] 2026-04-04 
00:55:33.759115 | orchestrator | 2026-04-04 00:55:33.759119 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:55:33.759123 | orchestrator | Saturday 04 April 2026 00:50:38 +0000 (0:00:00.766) 0:05:04.539 ******** 2026-04-04 00:55:33.759127 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759133 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759140 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.759146 | orchestrator | 2026-04-04 00:55:33.759157 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:55:33.759167 | orchestrator | Saturday 04 April 2026 00:50:39 +0000 (0:00:01.019) 0:05:05.558 ******** 2026-04-04 00:55:33.759173 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759179 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759185 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759192 | orchestrator | 2026-04-04 00:55:33.759198 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:55:33.759205 | orchestrator | Saturday 04 April 2026 00:50:39 +0000 (0:00:00.484) 0:05:06.042 ******** 2026-04-04 00:55:33.759212 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759219 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759225 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.759232 | orchestrator | 2026-04-04 00:55:33.759238 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:55:33.759245 | orchestrator | Saturday 04 April 2026 00:50:40 +0000 (0:00:00.701) 0:05:06.744 ******** 2026-04-04 00:55:33.759251 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759258 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759265 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759271 | orchestrator | 
2026-04-04 00:55:33.759279 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:55:33.759310 | orchestrator | Saturday 04 April 2026 00:50:40 +0000 (0:00:00.307) 0:05:07.052 ******** 2026-04-04 00:55:33.759316 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759320 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759324 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759333 | orchestrator | 2026-04-04 00:55:33.759338 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:55:33.759342 | orchestrator | Saturday 04 April 2026 00:50:41 +0000 (0:00:00.526) 0:05:07.578 ******** 2026-04-04 00:55:33.759346 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759350 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759354 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759358 | orchestrator | 2026-04-04 00:55:33.759362 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:55:33.759366 | orchestrator | Saturday 04 April 2026 00:50:41 +0000 (0:00:00.288) 0:05:07.867 ******** 2026-04-04 00:55:33.759370 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759374 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759378 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759382 | orchestrator | 2026-04-04 00:55:33.759387 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:55:33.759391 | orchestrator | Saturday 04 April 2026 00:50:42 +0000 (0:00:00.357) 0:05:08.225 ******** 2026-04-04 00:55:33.759395 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759399 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759403 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759407 | orchestrator | 
2026-04-04 00:55:33.759411 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:55:33.759415 | orchestrator | Saturday 04 April 2026 00:50:42 +0000 (0:00:00.371) 0:05:08.596 ******** 2026-04-04 00:55:33.759419 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759423 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759427 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.759431 | orchestrator | 2026-04-04 00:55:33.759435 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-04 00:55:33.759439 | orchestrator | Saturday 04 April 2026 00:50:42 +0000 (0:00:00.332) 0:05:08.929 ******** 2026-04-04 00:55:33.759443 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759448 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759452 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.759456 | orchestrator | 2026-04-04 00:55:33.759460 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:55:33.759464 | orchestrator | Saturday 04 April 2026 00:50:43 +0000 (0:00:00.566) 0:05:09.495 ******** 2026-04-04 00:55:33.759468 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759472 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759476 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.759480 | orchestrator | 2026-04-04 00:55:33.759484 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-04 00:55:33.759488 | orchestrator | Saturday 04 April 2026 00:50:43 +0000 (0:00:00.507) 0:05:10.003 ******** 2026-04-04 00:55:33.759492 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-04 00:55:33.759496 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:55:33.759504 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-04 00:55:33.759508 | orchestrator | 2026-04-04 00:55:33.759513 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-04 00:55:33.759517 | orchestrator | Saturday 04 April 2026 00:50:44 +0000 (0:00:00.855) 0:05:10.858 ******** 2026-04-04 00:55:33.759521 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.759525 | orchestrator | 2026-04-04 00:55:33.759529 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-04 00:55:33.759535 | orchestrator | Saturday 04 April 2026 00:50:45 +0000 (0:00:01.016) 0:05:11.875 ******** 2026-04-04 00:55:33.759542 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.759552 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.759560 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.759572 | orchestrator | 2026-04-04 00:55:33.759578 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-04 00:55:33.759585 | orchestrator | Saturday 04 April 2026 00:50:46 +0000 (0:00:00.697) 0:05:12.572 ******** 2026-04-04 00:55:33.759591 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.759597 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.759603 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.759609 | orchestrator | 2026-04-04 00:55:33.759616 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-04 00:55:33.759622 | orchestrator | Saturday 04 April 2026 00:50:46 +0000 (0:00:00.311) 0:05:12.884 ******** 2026-04-04 00:55:33.759629 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 00:55:33.759636 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 00:55:33.759642 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-04 00:55:33.759648 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-04 00:55:33.759654 | orchestrator | 2026-04-04 00:55:33.759661 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-04 00:55:33.759667 | orchestrator | Saturday 04 April 2026 00:50:54 +0000 (0:00:07.734) 0:05:20.618 ******** 2026-04-04 00:55:33.759674 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.759680 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.759686 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.759693 | orchestrator | 2026-04-04 00:55:33.759700 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-04 00:55:33.759706 | orchestrator | Saturday 04 April 2026 00:50:54 +0000 (0:00:00.522) 0:05:21.142 ******** 2026-04-04 00:55:33.759713 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-04 00:55:33.759721 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-04 00:55:33.759727 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-04 00:55:33.759735 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-04 00:55:33.759742 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.759779 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.759787 | orchestrator | 2026-04-04 00:55:33.759793 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-04 00:55:33.759800 | orchestrator | Saturday 04 April 2026 00:50:56 +0000 (0:00:01.921) 0:05:23.063 ******** 2026-04-04 00:55:33.759806 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-04 00:55:33.759813 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-04 00:55:33.759836 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-04 
00:55:33.759841 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-04 00:55:33.759845 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-04 00:55:33.759849 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-04 00:55:33.759853 | orchestrator |
2026-04-04 00:55:33.759858 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-04 00:55:33.759862 | orchestrator | Saturday 04 April 2026 00:50:58 +0000 (0:00:01.480) 0:05:24.543 ********
2026-04-04 00:55:33.759867 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.759871 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.759875 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.759879 | orchestrator |
2026-04-04 00:55:33.759883 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-04 00:55:33.759887 | orchestrator | Saturday 04 April 2026 00:50:59 +0000 (0:00:01.120) 0:05:25.663 ********
2026-04-04 00:55:33.759891 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.759896 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.759900 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.759904 | orchestrator |
2026-04-04 00:55:33.759908 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-04 00:55:33.759913 | orchestrator | Saturday 04 April 2026 00:50:59 +0000 (0:00:00.404) 0:05:25.888 ********
2026-04-04 00:55:33.759924 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.759928 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.759932 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.759936 | orchestrator |
2026-04-04 00:55:33.759940 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-04 00:55:33.759944 | orchestrator | Saturday 04 April 2026 00:51:00 +0000 (0:00:00.404) 0:05:26.293 ********
2026-04-04 00:55:33.759949 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.759953 | orchestrator |
2026-04-04 00:55:33.759957 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-04 00:55:33.759961 | orchestrator | Saturday 04 April 2026 00:51:00 +0000 (0:00:00.509) 0:05:26.803 ********
2026-04-04 00:55:33.759965 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.759969 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.759973 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.759978 | orchestrator |
2026-04-04 00:55:33.759982 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-04 00:55:33.759986 | orchestrator | Saturday 04 April 2026 00:51:00 +0000 (0:00:00.302) 0:05:27.105 ********
2026-04-04 00:55:33.759990 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.759998 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:55:33.760002 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.760006 | orchestrator |
2026-04-04 00:55:33.760010 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-04 00:55:33.760015 | orchestrator | Saturday 04 April 2026 00:51:01 +0000 (0:00:00.423) 0:05:27.529 ********
2026-04-04 00:55:33.760019 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.760024 | orchestrator |
2026-04-04 00:55:33.760028 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-04 00:55:33.760032 | orchestrator | Saturday 04 April 2026 00:51:01 +0000 (0:00:00.366) 0:05:27.896 ********
2026-04-04 00:55:33.760036 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.760040 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.760044 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.760048 | orchestrator |
2026-04-04 00:55:33.760052 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-04 00:55:33.760056 | orchestrator | Saturday 04 April 2026 00:51:02 +0000 (0:00:01.133) 0:05:29.030 ********
2026-04-04 00:55:33.760060 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.760065 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.760069 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.760073 | orchestrator |
2026-04-04 00:55:33.760077 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-04 00:55:33.760081 | orchestrator | Saturday 04 April 2026 00:51:04 +0000 (0:00:01.269) 0:05:30.299 ********
2026-04-04 00:55:33.760085 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.760089 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.760093 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.760098 | orchestrator |
2026-04-04 00:55:33.760102 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-04 00:55:33.760106 | orchestrator | Saturday 04 April 2026 00:51:06 +0000 (0:00:02.097) 0:05:32.397 ********
2026-04-04 00:55:33.760110 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.760114 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.760118 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.760122 | orchestrator |
2026-04-04 00:55:33.760126 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-04 00:55:33.760130 | orchestrator | Saturday 04 April 2026 00:51:08 +0000 (0:00:01.888) 0:05:34.285 ********
2026-04-04 00:55:33.760135 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.760141 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:55:33.760157 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-04 00:55:33.760164 | orchestrator |
2026-04-04 00:55:33.760171 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-04 00:55:33.760177 | orchestrator | Saturday 04 April 2026 00:51:08 +0000 (0:00:00.279) 0:05:34.565 ********
2026-04-04 00:55:33.760206 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-04 00:55:33.760214 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-04 00:55:33.760221 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:55:33.760228 | orchestrator |
2026-04-04 00:55:33.760235 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-04 00:55:33.760241 | orchestrator | Saturday 04 April 2026 00:51:21 +0000 (0:00:13.251) 0:05:47.816 ********
2026-04-04 00:55:33.760247 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:55:33.760252 | orchestrator |
2026-04-04 00:55:33.760256 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-04 00:55:33.760260 | orchestrator | Saturday 04 April 2026 00:51:22 +0000 (0:00:01.275) 0:05:49.091 ********
2026-04-04 00:55:33.760264 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.760268 | orchestrator |
2026-04-04 00:55:33.760272 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-04 00:55:33.760276 | orchestrator | Saturday 04 April 2026 00:51:23 +0000 (0:00:00.284) 0:05:49.376 ********
2026-04-04 00:55:33.760280 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.760284 | orchestrator |
2026-04-04 00:55:33.760289 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-04 00:55:33.760293 | orchestrator | Saturday 04 April 2026 00:51:23 +0000 (0:00:00.096) 0:05:49.473 ********
2026-04-04 00:55:33.760297 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-04 00:55:33.760301 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-04 00:55:33.760305 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-04 00:55:33.760309 | orchestrator |
2026-04-04 00:55:33.760313 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-04 00:55:33.760317 | orchestrator | Saturday 04 April 2026 00:51:29 +0000 (0:00:05.929) 0:05:55.402 ********
2026-04-04 00:55:33.760321 | orchestrator | skipping: [testbed-node-2] => (item=balancer) 
2026-04-04 00:55:33.760325 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-04 00:55:33.760329 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-04 00:55:33.760333 | orchestrator | skipping: [testbed-node-2] => (item=status) 
2026-04-04 00:55:33.760337 | orchestrator |
2026-04-04 00:55:33.760341 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-04 00:55:33.760346 | orchestrator | Saturday 04 April 2026 00:51:33 +0000 (0:00:04.545) 0:05:59.948 ********
2026-04-04 00:55:33.760350 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.760354 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.760358 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.760362 | orchestrator |
2026-04-04 00:55:33.760366 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-04 00:55:33.760373 | orchestrator | Saturday 04 April 2026 00:51:34 +0000 (0:00:00.805) 0:06:00.753 ********
2026-04-04 00:55:33.760378 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:55:33.760382 | orchestrator |
2026-04-04 00:55:33.760386 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-04 00:55:33.760390 | orchestrator | Saturday 04 April 2026 00:51:34 +0000 (0:00:00.284) 0:06:01.216 ********
2026-04-04 00:55:33.760398 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.760402 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.760406 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.760410 | orchestrator |
2026-04-04 00:55:33.760414 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-04 00:55:33.760418 | orchestrator | Saturday 04 April 2026 00:51:35 +0000 (0:00:00.284) 0:06:01.501 ********
2026-04-04 00:55:33.760422 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:55:33.760426 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:55:33.760430 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:55:33.760434 | orchestrator |
2026-04-04 00:55:33.760439 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-04 00:55:33.760443 | orchestrator | Saturday 04 April 2026 00:51:36 +0000 (0:00:01.366) 0:06:02.868 ********
2026-04-04 00:55:33.760447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-04-04 00:55:33.760451 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-04-04 00:55:33.760455 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-04-04 00:55:33.760459 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:55:33.760463 | orchestrator |
2026-04-04 00:55:33.760467 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-04 00:55:33.760474 | orchestrator | Saturday 04 April 2026 00:51:37 +0000 (0:00:00.552) 0:06:03.420 ********
2026-04-04 00:55:33.760483 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:55:33.760492 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:55:33.760498 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:55:33.760505 | orchestrator |
2026-04-04 00:55:33.760511 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-04 00:55:33.760517 | orchestrator |
2026-04-04 00:55:33.760523 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-04 00:55:33.760530 | orchestrator | Saturday 04 April 2026 00:51:37 +0000 (0:00:00.475) 0:06:03.896 ********
2026-04-04 00:55:33.760536 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.760543 | orchestrator |
2026-04-04 00:55:33.760550 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-04 00:55:33.760556 | orchestrator | Saturday 04 April 2026 00:51:38 +0000 (0:00:00.552) 0:06:04.448 ********
2026-04-04 00:55:33.760585 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.760590 | orchestrator |
2026-04-04 00:55:33.760595 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-04 00:55:33.760599 | orchestrator | Saturday 04 April 2026 00:51:38 +0000 (0:00:00.440) 0:06:04.889 ********
2026-04-04 00:55:33.760603 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.760607 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.760611 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.760616 | orchestrator |
2026-04-04 00:55:33.760620 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-04 00:55:33.760624 | orchestrator | Saturday 04 April 2026 00:51:38 +0000 (0:00:00.282) 0:06:05.171 ********
2026-04-04 00:55:33.760628 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.760632 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.760636 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.760640 | orchestrator |
2026-04-04 00:55:33.760644 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-04 00:55:33.760649 | orchestrator | Saturday 04 April 2026 00:51:40 +0000 (0:00:01.113) 0:06:06.284 ********
2026-04-04 00:55:33.760653 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.760657 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.760661 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.760665 | orchestrator |
2026-04-04 00:55:33.760669 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-04 00:55:33.760687 | orchestrator | Saturday 04 April 2026 00:51:40 +0000 (0:00:00.910) 0:06:07.194 ********
2026-04-04 00:55:33.760691 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.760695 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.760699 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.760703 | orchestrator |
2026-04-04 00:55:33.760707 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-04 00:55:33.760712 | orchestrator | Saturday 04 April 2026 00:51:41 +0000 (0:00:00.874) 0:06:08.068 ********
2026-04-04 00:55:33.760716 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.760720 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.760724 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.760728 | orchestrator |
2026-04-04 00:55:33.760732 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-04 00:55:33.760736 | orchestrator | Saturday 04 April 2026 00:51:42 +0000 (0:00:00.230) 0:06:08.299 ********
2026-04-04 00:55:33.760740 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.760744 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.760750 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.760757 | orchestrator |
2026-04-04 00:55:33.760768 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-04 00:55:33.760775 | orchestrator | Saturday 04 April 2026 00:51:42 +0000 (0:00:00.452) 0:06:08.752 ********
2026-04-04 00:55:33.760781 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.760788 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.760795 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.760802 | orchestrator |
2026-04-04 00:55:33.760809 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-04 00:55:33.760816 | orchestrator | Saturday 04 April 2026 00:51:42 +0000 (0:00:00.255) 0:06:09.008 ********
2026-04-04 00:55:33.760979 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.760987 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.760991 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.760996 | orchestrator |
2026-04-04 00:55:33.761000 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-04 00:55:33.761004 | orchestrator | Saturday 04 April 2026 00:51:43 +0000 (0:00:00.716) 0:06:09.724 ********
2026-04-04 00:55:33.761008 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761012 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761016 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761020 | orchestrator |
2026-04-04 00:55:33.761024 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-04 00:55:33.761028 | orchestrator | Saturday 04 April 2026 00:51:44 +0000 (0:00:00.763) 0:06:10.488 ********
2026-04-04 00:55:33.761033 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761037 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761041 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761045 | orchestrator |
2026-04-04 00:55:33.761049 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-04 00:55:33.761052 | orchestrator | Saturday 04 April 2026 00:51:44 +0000 (0:00:00.369) 0:06:10.858 ********
2026-04-04 00:55:33.761056 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761060 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761064 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761067 | orchestrator |
2026-04-04 00:55:33.761071 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-04 00:55:33.761075 | orchestrator | Saturday 04 April 2026 00:51:44 +0000 (0:00:00.201) 0:06:11.059 ********
2026-04-04 00:55:33.761079 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761082 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761086 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761090 | orchestrator |
2026-04-04 00:55:33.761093 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-04 00:55:33.761097 | orchestrator | Saturday 04 April 2026 00:51:45 +0000 (0:00:00.302) 0:06:11.361 ********
2026-04-04 00:55:33.761105 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761109 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761113 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761117 | orchestrator |
2026-04-04 00:55:33.761120 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-04 00:55:33.761124 | orchestrator | Saturday 04 April 2026 00:51:45 +0000 (0:00:00.332) 0:06:11.693 ********
2026-04-04 00:55:33.761128 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761132 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761135 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761139 | orchestrator |
2026-04-04 00:55:33.761143 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-04 00:55:33.761147 | orchestrator | Saturday 04 April 2026 00:51:45 +0000 (0:00:00.411) 0:06:12.105 ********
2026-04-04 00:55:33.761150 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761154 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761158 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761162 | orchestrator |
2026-04-04 00:55:33.761171 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-04 00:55:33.761175 | orchestrator | Saturday 04 April 2026 00:51:46 +0000 (0:00:00.328) 0:06:12.433 ********
2026-04-04 00:55:33.761178 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761182 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761186 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761190 | orchestrator |
2026-04-04 00:55:33.761193 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-04 00:55:33.761197 | orchestrator | Saturday 04 April 2026 00:51:46 +0000 (0:00:00.238) 0:06:12.672 ********
2026-04-04 00:55:33.761201 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761205 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761208 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761212 | orchestrator |
2026-04-04 00:55:33.761216 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-04 00:55:33.761220 | orchestrator | Saturday 04 April 2026 00:51:46 +0000 (0:00:00.288) 0:06:12.961 ********
2026-04-04 00:55:33.761223 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761227 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761231 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761234 | orchestrator |
2026-04-04 00:55:33.761246 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-04 00:55:33.761250 | orchestrator | Saturday 04 April 2026 00:51:47 +0000 (0:00:00.466) 0:06:13.427 ********
2026-04-04 00:55:33.761253 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761262 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761266 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761270 | orchestrator |
2026-04-04 00:55:33.761273 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-04 00:55:33.761277 | orchestrator | Saturday 04 April 2026 00:51:47 +0000 (0:00:00.445) 0:06:13.873 ********
2026-04-04 00:55:33.761281 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761285 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761288 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761292 | orchestrator |
2026-04-04 00:55:33.761296 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-04 00:55:33.761300 | orchestrator | Saturday 04 April 2026 00:51:47 +0000 (0:00:00.263) 0:06:14.136 ********
2026-04-04 00:55:33.761303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-04 00:55:33.761307 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-04 00:55:33.761311 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-04 00:55:33.761315 | orchestrator |
2026-04-04 00:55:33.761318 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-04 00:55:33.761322 | orchestrator | Saturday 04 April 2026 00:51:48 +0000 (0:00:00.727) 0:06:14.864 ********
2026-04-04 00:55:33.761329 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.761333 | orchestrator |
2026-04-04 00:55:33.761336 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-04 00:55:33.761343 | orchestrator | Saturday 04 April 2026 00:51:49 +0000 (0:00:00.511) 0:06:15.375 ********
2026-04-04 00:55:33.761346 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761350 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761354 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761358 | orchestrator |
2026-04-04 00:55:33.761361 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-04 00:55:33.761365 | orchestrator | Saturday 04 April 2026 00:51:49 +0000 (0:00:00.238) 0:06:15.613 ********
2026-04-04 00:55:33.761369 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761373 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761376 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761380 | orchestrator |
2026-04-04 00:55:33.761384 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-04 00:55:33.761387 | orchestrator | Saturday 04 April 2026 00:51:49 +0000 (0:00:00.249) 0:06:15.862 ********
2026-04-04 00:55:33.761391 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761395 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761399 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761402 | orchestrator |
2026-04-04 00:55:33.761406 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-04 00:55:33.761410 | orchestrator | Saturday 04 April 2026 00:51:50 +0000 (0:00:00.902) 0:06:16.764 ********
2026-04-04 00:55:33.761414 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761417 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761421 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761425 | orchestrator |
2026-04-04 00:55:33.761428 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-04 00:55:33.761432 | orchestrator | Saturday 04 April 2026 00:51:50 +0000 (0:00:00.302) 0:06:17.067 ********
2026-04-04 00:55:33.761436 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-04 00:55:33.761440 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-04 00:55:33.761444 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-04 00:55:33.761447 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-04 00:55:33.761451 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-04 00:55:33.761455 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-04 00:55:33.761459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-04 00:55:33.761462 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-04 00:55:33.761472 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-04 00:55:33.761476 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-04 00:55:33.761480 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-04 00:55:33.761483 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-04 00:55:33.761487 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-04 00:55:33.761491 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-04 00:55:33.761495 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-04 00:55:33.761498 | orchestrator |
2026-04-04 00:55:33.761502 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-04 00:55:33.761509 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:04.211) 0:06:21.278 ********
2026-04-04 00:55:33.761513 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761517 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761520 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761524 | orchestrator |
2026-04-04 00:55:33.761528 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-04 00:55:33.761532 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:00.274) 0:06:21.553 ********
2026-04-04 00:55:33.761535 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.761539 | orchestrator |
2026-04-04 00:55:33.761543 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-04 00:55:33.761547 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:00.539) 0:06:22.092 ********
2026-04-04 00:55:33.761550 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-04 00:55:33.761554 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-04 00:55:33.761558 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-04 00:55:33.761562 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-04 00:55:33.761565 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-04 00:55:33.761569 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-04 00:55:33.761573 | orchestrator |
2026-04-04 00:55:33.761577 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-04 00:55:33.761580 | orchestrator | Saturday 04 April 2026 00:51:56 +0000 (0:00:01.072) 0:06:23.165 ********
2026-04-04 00:55:33.761584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-04 00:55:33.761588 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2026-04-04 00:55:33.761592 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-04 00:55:33.761595 | orchestrator |
2026-04-04 00:55:33.761601 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-04 00:55:33.761605 | orchestrator | Saturday 04 April 2026 00:51:58 +0000 (0:00:01.788) 0:06:24.954 ********
2026-04-04 00:55:33.761609 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-04 00:55:33.761613 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2026-04-04 00:55:33.761616 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:55:33.761620 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-04 00:55:33.761624 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2026-04-04 00:55:33.761628 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:55:33.761631 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-04 00:55:33.761635 | orchestrator | skipping: [testbed-node-5] => (item=None) 
2026-04-04 00:55:33.761639 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:55:33.761642 | orchestrator |
2026-04-04 00:55:33.761646 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-04 00:55:33.761650 | orchestrator | Saturday 04 April 2026 00:52:00 +0000 (0:00:01.931) 0:06:26.254 ********
2026-04-04 00:55:33.761654 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:55:33.761657 | orchestrator |
2026-04-04 00:55:33.761661 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-04 00:55:33.761665 | orchestrator | Saturday 04 April 2026 00:52:01 +0000 (0:00:01.931) 0:06:28.186 ********
2026-04-04 00:55:33.761669 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.761672 | orchestrator |
2026-04-04 00:55:33.761676 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-04 00:55:33.761680 | orchestrator | Saturday 04 April 2026 00:52:02 +0000 (0:00:00.557) 0:06:28.743 ********
2026-04-04 00:55:33.761687 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b1fc2ad7-1445-5918-af09-c59800dad69a', 'data_vg': 'ceph-b1fc2ad7-1445-5918-af09-c59800dad69a'})
2026-04-04 00:55:33.761692 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a8cb98ca-1bad-517a-917a-7c952ebb91ae', 'data_vg': 'ceph-a8cb98ca-1bad-517a-917a-7c952ebb91ae'})
2026-04-04 00:55:33.761696 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3', 'data_vg': 'ceph-7fdc24e9-a76c-5276-a9f5-2fea7f78f0c3'})
2026-04-04 00:55:33.761700 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f8b2f720-8689-5378-93a8-1716210ee10b', 'data_vg': 'ceph-f8b2f720-8689-5378-93a8-1716210ee10b'})
2026-04-04 00:55:33.761706 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6', 'data_vg': 'ceph-0b8e88b0-25e2-5e5e-a9b3-eb58a1775db6'})
2026-04-04 00:55:33.761710 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf', 'data_vg': 'ceph-ecc56a61-ea8b-515f-be54-1cf9bb6e81cf'})
2026-04-04 00:55:33.761714 | orchestrator |
2026-04-04 00:55:33.761718 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-04 00:55:33.761722 | orchestrator | Saturday 04 April 2026 00:52:39 +0000 (0:00:36.558) 0:07:05.302 ********
2026-04-04 00:55:33.761725 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761729 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761733 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761737 | orchestrator |
2026-04-04 00:55:33.761740 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-04 00:55:33.761744 | orchestrator | Saturday 04 April 2026 00:52:39 +0000 (0:00:00.450) 0:07:05.752 ********
2026-04-04 00:55:33.761748 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.761751 | orchestrator |
2026-04-04 00:55:33.761755 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-04 00:55:33.761759 | orchestrator | Saturday 04 April 2026 00:52:39 +0000 (0:00:00.462) 0:07:06.214 ********
2026-04-04 00:55:33.761763 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761766 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761770 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761774 | orchestrator |
2026-04-04 00:55:33.761778 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-04 00:55:33.761781 | orchestrator | Saturday 04 April 2026 00:52:40 +0000 (0:00:00.655) 0:07:06.870 ********
2026-04-04 00:55:33.761785 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.761789 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.761793 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.761796 | orchestrator |
2026-04-04 00:55:33.761800 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-04 00:55:33.761804 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:01.626) 0:07:08.496 ********
2026-04-04 00:55:33.761808 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:33.761811 | orchestrator |
2026-04-04 00:55:33.761815 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-04 00:55:33.761832 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:00.446) 0:07:08.943 ********
2026-04-04 00:55:33.761838 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:55:33.761844 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:55:33.761851 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:55:33.761856 | orchestrator |
2026-04-04 00:55:33.761864 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-04 00:55:33.761868 | orchestrator | Saturday 04 April 2026 00:52:43 +0000 (0:00:01.259) 0:07:10.202 ********
2026-04-04 00:55:33.761872 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:55:33.761875 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:55:33.761879 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:55:33.761886 | orchestrator |
2026-04-04 00:55:33.761892 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-04 00:55:33.761896 | orchestrator | Saturday 04 April 2026 00:52:45 +0000 (0:00:01.179) 0:07:11.382 ********
2026-04-04 00:55:33.761900 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:55:33.761904 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:55:33.761907 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:55:33.761911 | orchestrator |
2026-04-04 00:55:33.761915 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-04 00:55:33.761918 | orchestrator | Saturday 04 April 2026 00:52:46 +0000 (0:00:01.833) 0:07:13.215 ********
2026-04-04 00:55:33.761922 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761926 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761930 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761933 | orchestrator |
2026-04-04 00:55:33.761937 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-04 00:55:33.761941 | orchestrator | Saturday 04 April 2026 00:52:47 +0000 (0:00:00.282) 0:07:13.498 ********
2026-04-04 00:55:33.761944 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.761948 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:33.761952 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:33.761956 | orchestrator |
2026-04-04 00:55:33.761959 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-04 00:55:33.761963 | orchestrator | Saturday 04 April 2026 00:52:47 +0000 (0:00:00.259) 0:07:13.758 ********
2026-04-04 00:55:33.761967 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-04-04 00:55:33.761970 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-04-04 00:55:33.761974 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-04-04 00:55:33.761978 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-04 00:55:33.761982 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-04 00:55:33.761985 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-04-04 00:55:33.761989 | orchestrator |
2026-04-04 00:55:33.761993 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-04 00:55:33.761996 | orchestrator | Saturday 04 April 2026 00:52:48 +0000
(0:00:01.289) 0:07:15.047 ******** 2026-04-04 00:55:33.762000 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-04 00:55:33.762004 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-04 00:55:33.762008 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-04 00:55:33.762037 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-04 00:55:33.762043 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-04 00:55:33.762046 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-04 00:55:33.762050 | orchestrator | 2026-04-04 00:55:33.762054 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-04 00:55:33.762058 | orchestrator | Saturday 04 April 2026 00:52:51 +0000 (0:00:02.208) 0:07:17.255 ******** 2026-04-04 00:55:33.762061 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-04 00:55:33.762065 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-04 00:55:33.762072 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-04 00:55:33.762076 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-04 00:55:33.762079 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-04 00:55:33.762083 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-04 00:55:33.762087 | orchestrator | 2026-04-04 00:55:33.762091 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-04 00:55:33.762094 | orchestrator | Saturday 04 April 2026 00:52:55 +0000 (0:00:04.010) 0:07:21.266 ******** 2026-04-04 00:55:33.762098 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762102 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762105 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:55:33.762109 | orchestrator | 2026-04-04 00:55:33.762113 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] 
************************************ 2026-04-04 00:55:33.762120 | orchestrator | Saturday 04 April 2026 00:52:57 +0000 (0:00:02.478) 0:07:23.744 ******** 2026-04-04 00:55:33.762124 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762127 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762131 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-04 00:55:33.762135 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:55:33.762139 | orchestrator | 2026-04-04 00:55:33.762142 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-04 00:55:33.762146 | orchestrator | Saturday 04 April 2026 00:53:10 +0000 (0:00:12.725) 0:07:36.470 ******** 2026-04-04 00:55:33.762150 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762153 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762157 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762161 | orchestrator | 2026-04-04 00:55:33.762165 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:55:33.762168 | orchestrator | Saturday 04 April 2026 00:53:11 +0000 (0:00:00.801) 0:07:37.271 ******** 2026-04-04 00:55:33.762172 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762176 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762180 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762183 | orchestrator | 2026-04-04 00:55:33.762187 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-04 00:55:33.762191 | orchestrator | Saturday 04 April 2026 00:53:11 +0000 (0:00:00.546) 0:07:37.818 ******** 2026-04-04 00:55:33.762194 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 
00:55:33.762198 | orchestrator | 2026-04-04 00:55:33.762202 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-04 00:55:33.762206 | orchestrator | Saturday 04 April 2026 00:53:12 +0000 (0:00:00.515) 0:07:38.334 ******** 2026-04-04 00:55:33.762209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:55:33.762213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:55:33.762217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:55:33.762223 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762227 | orchestrator | 2026-04-04 00:55:33.762230 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-04 00:55:33.762234 | orchestrator | Saturday 04 April 2026 00:53:12 +0000 (0:00:00.373) 0:07:38.707 ******** 2026-04-04 00:55:33.762238 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762242 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762245 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762249 | orchestrator | 2026-04-04 00:55:33.762253 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-04 00:55:33.762256 | orchestrator | Saturday 04 April 2026 00:53:12 +0000 (0:00:00.283) 0:07:38.991 ******** 2026-04-04 00:55:33.762260 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762264 | orchestrator | 2026-04-04 00:55:33.762268 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-04 00:55:33.762271 | orchestrator | Saturday 04 April 2026 00:53:12 +0000 (0:00:00.210) 0:07:39.201 ******** 2026-04-04 00:55:33.762275 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762279 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762282 | orchestrator | skipping: [testbed-node-5] 
2026-04-04 00:55:33.762286 | orchestrator | 2026-04-04 00:55:33.762290 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-04 00:55:33.762294 | orchestrator | Saturday 04 April 2026 00:53:13 +0000 (0:00:00.552) 0:07:39.753 ******** 2026-04-04 00:55:33.762297 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762301 | orchestrator | 2026-04-04 00:55:33.762305 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-04 00:55:33.762314 | orchestrator | Saturday 04 April 2026 00:53:13 +0000 (0:00:00.207) 0:07:39.961 ******** 2026-04-04 00:55:33.762317 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762321 | orchestrator | 2026-04-04 00:55:33.762325 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-04 00:55:33.762329 | orchestrator | Saturday 04 April 2026 00:53:13 +0000 (0:00:00.208) 0:07:40.170 ******** 2026-04-04 00:55:33.762332 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762336 | orchestrator | 2026-04-04 00:55:33.762340 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-04 00:55:33.762343 | orchestrator | Saturday 04 April 2026 00:53:14 +0000 (0:00:00.125) 0:07:40.296 ******** 2026-04-04 00:55:33.762347 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762351 | orchestrator | 2026-04-04 00:55:33.762355 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-04 00:55:33.762358 | orchestrator | Saturday 04 April 2026 00:53:14 +0000 (0:00:00.209) 0:07:40.505 ******** 2026-04-04 00:55:33.762362 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762366 | orchestrator | 2026-04-04 00:55:33.762369 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-04 00:55:33.762373 | orchestrator | 
Saturday 04 April 2026 00:53:14 +0000 (0:00:00.203) 0:07:40.708 ******** 2026-04-04 00:55:33.762379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:55:33.762383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:55:33.762387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:55:33.762391 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762394 | orchestrator | 2026-04-04 00:55:33.762398 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-04 00:55:33.762402 | orchestrator | Saturday 04 April 2026 00:53:14 +0000 (0:00:00.383) 0:07:41.092 ******** 2026-04-04 00:55:33.762405 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762409 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762413 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762416 | orchestrator | 2026-04-04 00:55:33.762420 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-04 00:55:33.762424 | orchestrator | Saturday 04 April 2026 00:53:15 +0000 (0:00:00.410) 0:07:41.502 ******** 2026-04-04 00:55:33.762428 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762431 | orchestrator | 2026-04-04 00:55:33.762435 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-04 00:55:33.762439 | orchestrator | Saturday 04 April 2026 00:53:16 +0000 (0:00:00.797) 0:07:42.299 ******** 2026-04-04 00:55:33.762443 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762446 | orchestrator | 2026-04-04 00:55:33.762450 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-04 00:55:33.762454 | orchestrator | 2026-04-04 00:55:33.762457 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-04 
00:55:33.762461 | orchestrator | Saturday 04 April 2026 00:53:16 +0000 (0:00:00.631) 0:07:42.930 ******** 2026-04-04 00:55:33.762465 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.762470 | orchestrator | 2026-04-04 00:55:33.762474 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-04 00:55:33.762477 | orchestrator | Saturday 04 April 2026 00:53:17 +0000 (0:00:01.231) 0:07:44.162 ******** 2026-04-04 00:55:33.762481 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.762485 | orchestrator | 2026-04-04 00:55:33.762489 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-04 00:55:33.762493 | orchestrator | Saturday 04 April 2026 00:53:19 +0000 (0:00:01.130) 0:07:45.292 ******** 2026-04-04 00:55:33.762499 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762503 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762507 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762510 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.762514 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.762518 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.762522 | orchestrator | 2026-04-04 00:55:33.762525 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-04 00:55:33.762531 | orchestrator | Saturday 04 April 2026 00:53:20 +0000 (0:00:01.180) 0:07:46.473 ******** 2026-04-04 00:55:33.762535 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762539 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762543 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 00:55:33.762546 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762550 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762554 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762557 | orchestrator | 2026-04-04 00:55:33.762561 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-04 00:55:33.762565 | orchestrator | Saturday 04 April 2026 00:53:20 +0000 (0:00:00.665) 0:07:47.138 ******** 2026-04-04 00:55:33.762569 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762572 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762576 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762580 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762583 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762587 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762591 | orchestrator | 2026-04-04 00:55:33.762594 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-04 00:55:33.762598 | orchestrator | Saturday 04 April 2026 00:53:21 +0000 (0:00:00.970) 0:07:48.108 ******** 2026-04-04 00:55:33.762602 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762605 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762609 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762613 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762616 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762620 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762624 | orchestrator | 2026-04-04 00:55:33.762627 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-04 00:55:33.762631 | orchestrator | Saturday 04 April 2026 00:53:22 +0000 (0:00:00.742) 0:07:48.851 ******** 2026-04-04 00:55:33.762635 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762639 | 
orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762642 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762646 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.762650 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.762653 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.762657 | orchestrator | 2026-04-04 00:55:33.762661 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-04 00:55:33.762664 | orchestrator | Saturday 04 April 2026 00:53:23 +0000 (0:00:01.070) 0:07:49.921 ******** 2026-04-04 00:55:33.762668 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762672 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762676 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762679 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762683 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762687 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762690 | orchestrator | 2026-04-04 00:55:33.762694 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:55:33.762698 | orchestrator | Saturday 04 April 2026 00:53:24 +0000 (0:00:00.837) 0:07:50.758 ******** 2026-04-04 00:55:33.762701 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762707 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762711 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762715 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762722 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762726 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762729 | orchestrator | 2026-04-04 00:55:33.762733 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:55:33.762737 | orchestrator | Saturday 04 April 2026 00:53:25 +0000 (0:00:00.557) 
0:07:51.316 ******** 2026-04-04 00:55:33.762740 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762744 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762748 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762752 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.762755 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.762759 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.762763 | orchestrator | 2026-04-04 00:55:33.762766 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:55:33.762770 | orchestrator | Saturday 04 April 2026 00:53:26 +0000 (0:00:01.358) 0:07:52.675 ******** 2026-04-04 00:55:33.762774 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762778 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762781 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762785 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.762789 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.762792 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.762796 | orchestrator | 2026-04-04 00:55:33.762800 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:55:33.762803 | orchestrator | Saturday 04 April 2026 00:53:27 +0000 (0:00:00.907) 0:07:53.582 ******** 2026-04-04 00:55:33.762807 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762811 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762815 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762830 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762833 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762837 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762841 | orchestrator | 2026-04-04 00:55:33.762845 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:55:33.762848 | 
orchestrator | Saturday 04 April 2026 00:53:28 +0000 (0:00:00.766) 0:07:54.348 ******** 2026-04-04 00:55:33.762852 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762856 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762859 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.762863 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.762867 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.762870 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.762874 | orchestrator | 2026-04-04 00:55:33.762878 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:55:33.762882 | orchestrator | Saturday 04 April 2026 00:53:28 +0000 (0:00:00.553) 0:07:54.902 ******** 2026-04-04 00:55:33.762885 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762889 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762893 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762897 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762900 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762904 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762908 | orchestrator | 2026-04-04 00:55:33.762914 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:55:33.762918 | orchestrator | Saturday 04 April 2026 00:53:29 +0000 (0:00:00.785) 0:07:55.688 ******** 2026-04-04 00:55:33.762921 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762925 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762929 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762933 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762936 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762940 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762944 | orchestrator | 2026-04-04 00:55:33.762948 | orchestrator | TASK [ceph-handler : 
Set_fact handler_rgw_status] ****************************** 2026-04-04 00:55:33.762951 | orchestrator | Saturday 04 April 2026 00:53:30 +0000 (0:00:00.561) 0:07:56.250 ******** 2026-04-04 00:55:33.762957 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.762961 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.762965 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.762968 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.762972 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.762976 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.762979 | orchestrator | 2026-04-04 00:55:33.762983 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:55:33.762987 | orchestrator | Saturday 04 April 2026 00:53:30 +0000 (0:00:00.710) 0:07:56.961 ******** 2026-04-04 00:55:33.762991 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.762994 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.762998 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763002 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.763005 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.763009 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:55:33.763013 | orchestrator | 2026-04-04 00:55:33.763016 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:55:33.763020 | orchestrator | Saturday 04 April 2026 00:53:31 +0000 (0:00:00.503) 0:07:57.464 ******** 2026-04-04 00:55:33.763024 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763027 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763031 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763035 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:55:33.763038 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:55:33.763042 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:55:33.763046 | orchestrator | 2026-04-04 00:55:33.763049 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:55:33.763053 | orchestrator | Saturday 04 April 2026 00:53:31 +0000 (0:00:00.645) 0:07:58.109 ******** 2026-04-04 00:55:33.763057 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763060 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763064 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763068 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.763071 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.763075 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.763079 | orchestrator | 2026-04-04 00:55:33.763083 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-04 00:55:33.763089 | orchestrator | Saturday 04 April 2026 00:53:32 +0000 (0:00:00.519) 0:07:58.629 ******** 2026-04-04 00:55:33.763093 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763097 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763100 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763104 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.763108 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.763112 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.763115 | orchestrator | 2026-04-04 00:55:33.763119 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:55:33.763123 | orchestrator | Saturday 04 April 2026 00:53:33 +0000 (0:00:00.659) 0:07:59.288 ******** 2026-04-04 00:55:33.763126 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763130 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763134 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763137 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.763141 | orchestrator | ok: [testbed-node-1] 
2026-04-04 00:55:33.763145 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.763148 | orchestrator | 2026-04-04 00:55:33.763152 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-04 00:55:33.763156 | orchestrator | Saturday 04 April 2026 00:53:34 +0000 (0:00:00.967) 0:08:00.256 ******** 2026-04-04 00:55:33.763160 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:55:33.763163 | orchestrator | 2026-04-04 00:55:33.763167 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-04 00:55:33.763174 | orchestrator | Saturday 04 April 2026 00:53:36 +0000 (0:00:02.607) 0:08:02.864 ******** 2026-04-04 00:55:33.763178 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:55:33.763182 | orchestrator | 2026-04-04 00:55:33.763185 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-04 00:55:33.763189 | orchestrator | Saturday 04 April 2026 00:53:38 +0000 (0:00:01.414) 0:08:04.278 ******** 2026-04-04 00:55:33.763193 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.763197 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.763201 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.763205 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.763208 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.763212 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.763216 | orchestrator | 2026-04-04 00:55:33.763219 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-04 00:55:33.763223 | orchestrator | Saturday 04 April 2026 00:53:39 +0000 (0:00:01.415) 0:08:05.693 ******** 2026-04-04 00:55:33.763227 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.763230 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.763234 | 
orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.763238 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.763242 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.763245 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.763249 | orchestrator | 2026-04-04 00:55:33.763253 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-04 00:55:33.763257 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:01.184) 0:08:06.877 ******** 2026-04-04 00:55:33.763260 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.763265 | orchestrator | 2026-04-04 00:55:33.763272 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-04 00:55:33.763275 | orchestrator | Saturday 04 April 2026 00:53:41 +0000 (0:00:01.129) 0:08:08.007 ******** 2026-04-04 00:55:33.763279 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.763283 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.763287 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.763290 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.763294 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.763298 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.763301 | orchestrator | 2026-04-04 00:55:33.763305 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-04 00:55:33.763309 | orchestrator | Saturday 04 April 2026 00:53:43 +0000 (0:00:01.544) 0:08:09.552 ******** 2026-04-04 00:55:33.763313 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.763316 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.763320 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.763324 | orchestrator | changed: 
[testbed-node-1] 2026-04-04 00:55:33.763327 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.763331 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.763335 | orchestrator | 2026-04-04 00:55:33.763338 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-04 00:55:33.763342 | orchestrator | Saturday 04 April 2026 00:53:47 +0000 (0:00:03.682) 0:08:13.234 ******** 2026-04-04 00:55:33.763346 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:55:33.763350 | orchestrator | 2026-04-04 00:55:33.763354 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-04 00:55:33.763357 | orchestrator | Saturday 04 April 2026 00:53:48 +0000 (0:00:00.995) 0:08:14.229 ******** 2026-04-04 00:55:33.763361 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763365 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763372 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763375 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.763379 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.763383 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.763386 | orchestrator | 2026-04-04 00:55:33.763390 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-04 00:55:33.763394 | orchestrator | Saturday 04 April 2026 00:53:48 +0000 (0:00:00.526) 0:08:14.756 ******** 2026-04-04 00:55:33.763398 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.763402 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.763405 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.763409 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:55:33.763413 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:55:33.763417 | 
orchestrator | changed: [testbed-node-2] 2026-04-04 00:55:33.763420 | orchestrator | 2026-04-04 00:55:33.763424 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-04 00:55:33.763431 | orchestrator | Saturday 04 April 2026 00:53:50 +0000 (0:00:02.170) 0:08:16.926 ******** 2026-04-04 00:55:33.763435 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763439 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763443 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763446 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:55:33.763450 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:55:33.763454 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:55:33.763457 | orchestrator | 2026-04-04 00:55:33.763461 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-04 00:55:33.763465 | orchestrator | 2026-04-04 00:55:33.763469 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-04 00:55:33.763473 | orchestrator | Saturday 04 April 2026 00:53:51 +0000 (0:00:00.719) 0:08:17.646 ******** 2026-04-04 00:55:33.763477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.763481 | orchestrator | 2026-04-04 00:55:33.763484 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-04 00:55:33.763488 | orchestrator | Saturday 04 April 2026 00:53:51 +0000 (0:00:00.574) 0:08:18.220 ******** 2026-04-04 00:55:33.763492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.763496 | orchestrator | 2026-04-04 00:55:33.763499 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-04 00:55:33.763503 | 
orchestrator | Saturday 04 April 2026 00:53:52 +0000 (0:00:00.436) 0:08:18.657 ******** 2026-04-04 00:55:33.763507 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763511 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763515 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763518 | orchestrator | 2026-04-04 00:55:33.763522 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-04 00:55:33.763526 | orchestrator | Saturday 04 April 2026 00:53:52 +0000 (0:00:00.388) 0:08:19.045 ******** 2026-04-04 00:55:33.763530 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763533 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763537 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763541 | orchestrator | 2026-04-04 00:55:33.763544 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-04 00:55:33.763548 | orchestrator | Saturday 04 April 2026 00:53:53 +0000 (0:00:00.611) 0:08:19.656 ******** 2026-04-04 00:55:33.763552 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763556 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763559 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763563 | orchestrator | 2026-04-04 00:55:33.763567 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-04 00:55:33.763570 | orchestrator | Saturday 04 April 2026 00:53:54 +0000 (0:00:00.632) 0:08:20.289 ******** 2026-04-04 00:55:33.763574 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763580 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763584 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763588 | orchestrator | 2026-04-04 00:55:33.763592 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-04 00:55:33.763595 | orchestrator | Saturday 04 April 2026 00:53:54 +0000 
(0:00:00.663) 0:08:20.952 ******** 2026-04-04 00:55:33.763602 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763606 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763610 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763613 | orchestrator | 2026-04-04 00:55:33.763617 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-04 00:55:33.763621 | orchestrator | Saturday 04 April 2026 00:53:55 +0000 (0:00:00.525) 0:08:21.478 ******** 2026-04-04 00:55:33.763625 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763629 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763633 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763636 | orchestrator | 2026-04-04 00:55:33.763640 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:55:33.763644 | orchestrator | Saturday 04 April 2026 00:53:55 +0000 (0:00:00.304) 0:08:21.782 ******** 2026-04-04 00:55:33.763648 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763651 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763655 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763659 | orchestrator | 2026-04-04 00:55:33.763663 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:55:33.763667 | orchestrator | Saturday 04 April 2026 00:53:55 +0000 (0:00:00.284) 0:08:22.067 ******** 2026-04-04 00:55:33.763670 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763674 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763678 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763682 | orchestrator | 2026-04-04 00:55:33.763685 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:55:33.763689 | orchestrator | Saturday 04 April 2026 00:53:56 +0000 (0:00:00.695) 
0:08:22.763 ******** 2026-04-04 00:55:33.763693 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763697 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763700 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763704 | orchestrator | 2026-04-04 00:55:33.763708 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:55:33.763712 | orchestrator | Saturday 04 April 2026 00:53:57 +0000 (0:00:00.960) 0:08:23.724 ******** 2026-04-04 00:55:33.763716 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763719 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763723 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763727 | orchestrator | 2026-04-04 00:55:33.763731 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:55:33.763735 | orchestrator | Saturday 04 April 2026 00:53:57 +0000 (0:00:00.283) 0:08:24.007 ******** 2026-04-04 00:55:33.763738 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763742 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763746 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763749 | orchestrator | 2026-04-04 00:55:33.763753 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:55:33.763757 | orchestrator | Saturday 04 April 2026 00:53:58 +0000 (0:00:00.267) 0:08:24.275 ******** 2026-04-04 00:55:33.763761 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763764 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763770 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763775 | orchestrator | 2026-04-04 00:55:33.763779 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:55:33.763782 | orchestrator | Saturday 04 April 2026 00:53:58 +0000 (0:00:00.273) 0:08:24.549 ******** 2026-04-04 
00:55:33.763786 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763790 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763794 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763800 | orchestrator | 2026-04-04 00:55:33.763804 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:55:33.763808 | orchestrator | Saturday 04 April 2026 00:53:58 +0000 (0:00:00.451) 0:08:25.001 ******** 2026-04-04 00:55:33.763812 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763815 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763833 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763837 | orchestrator | 2026-04-04 00:55:33.763841 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:55:33.763845 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:00.282) 0:08:25.284 ******** 2026-04-04 00:55:33.763849 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763852 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763856 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763860 | orchestrator | 2026-04-04 00:55:33.763864 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:55:33.763868 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:00.246) 0:08:25.530 ******** 2026-04-04 00:55:33.763872 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.763875 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763879 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763883 | orchestrator | 2026-04-04 00:55:33.763887 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:55:33.763891 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:00.254) 0:08:25.784 ******** 2026-04-04 00:55:33.763894 | orchestrator | 
skipping: [testbed-node-3] 2026-04-04 00:55:33.763898 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763902 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763906 | orchestrator | 2026-04-04 00:55:33.763910 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-04 00:55:33.763913 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:00.436) 0:08:26.221 ******** 2026-04-04 00:55:33.763917 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763921 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763925 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763928 | orchestrator | 2026-04-04 00:55:33.763932 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:55:33.763936 | orchestrator | Saturday 04 April 2026 00:54:00 +0000 (0:00:00.273) 0:08:26.494 ******** 2026-04-04 00:55:33.763940 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.763944 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.763947 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.763951 | orchestrator | 2026-04-04 00:55:33.763955 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-04 00:55:33.763959 | orchestrator | Saturday 04 April 2026 00:54:00 +0000 (0:00:00.455) 0:08:26.949 ******** 2026-04-04 00:55:33.763963 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.763967 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.763973 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-04 00:55:33.763977 | orchestrator | 2026-04-04 00:55:33.763981 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-04-04 00:55:33.763985 | orchestrator | Saturday 04 April 2026 00:54:01 +0000 (0:00:00.485) 0:08:27.435 ******** 2026-04-04 
00:55:33.763988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:55:33.763992 | orchestrator | 2026-04-04 00:55:33.763996 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-04 00:55:33.763999 | orchestrator | Saturday 04 April 2026 00:54:03 +0000 (0:00:01.831) 0:08:29.267 ******** 2026-04-04 00:55:33.764004 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-04 00:55:33.764010 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764017 | orchestrator | 2026-04-04 00:55:33.764021 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-04 00:55:33.764025 | orchestrator | Saturday 04 April 2026 00:54:03 +0000 (0:00:00.196) 0:08:29.464 ******** 2026-04-04 00:55:33.764032 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:55:33.764041 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:55:33.764045 | orchestrator | 2026-04-04 00:55:33.764049 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-04 00:55:33.764052 | orchestrator | Saturday 04 April 2026 00:54:09 +0000 (0:00:06.166) 0:08:35.630 ******** 2026-04-04 00:55:33.764056 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-04-04 00:55:33.764060 | orchestrator | 2026-04-04 00:55:33.764064 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-04 00:55:33.764068 | orchestrator | Saturday 04 April 2026 00:54:11 +0000 (0:00:02.585) 0:08:38.216 ******** 2026-04-04 00:55:33.764074 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.764078 | orchestrator | 2026-04-04 00:55:33.764082 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-04 00:55:33.764086 | orchestrator | Saturday 04 April 2026 00:54:12 +0000 (0:00:00.775) 0:08:38.991 ******** 2026-04-04 00:55:33.764089 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-04 00:55:33.764093 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-04 00:55:33.764097 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-04 00:55:33.764101 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-04 00:55:33.764104 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-04 00:55:33.764108 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-04 00:55:33.764112 | orchestrator | 2026-04-04 00:55:33.764115 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-04 00:55:33.764119 | orchestrator | Saturday 04 April 2026 00:54:14 +0000 (0:00:01.396) 0:08:40.387 ******** 2026-04-04 00:55:33.764123 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.764126 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:55:33.764130 | orchestrator | ok: [testbed-node-3 -> {{ 
groups.get(mon_group_name)[0] }}] 2026-04-04 00:55:33.764134 | orchestrator | 2026-04-04 00:55:33.764138 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-04 00:55:33.764141 | orchestrator | Saturday 04 April 2026 00:54:16 +0000 (0:00:02.104) 0:08:42.492 ******** 2026-04-04 00:55:33.764145 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 00:55:33.764149 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:55:33.764153 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764156 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 00:55:33.764160 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-04 00:55:33.764164 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764168 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 00:55:33.764171 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-04 00:55:33.764175 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764179 | orchestrator | 2026-04-04 00:55:33.764183 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-04 00:55:33.764200 | orchestrator | Saturday 04 April 2026 00:54:17 +0000 (0:00:01.421) 0:08:43.914 ******** 2026-04-04 00:55:33.764204 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764208 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764217 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764221 | orchestrator | 2026-04-04 00:55:33.764225 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-04 00:55:33.764229 | orchestrator | Saturday 04 April 2026 00:54:19 +0000 (0:00:02.124) 0:08:46.038 ******** 2026-04-04 00:55:33.764233 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764236 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764244 | 
orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.764248 | orchestrator | 2026-04-04 00:55:33.764252 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-04 00:55:33.764256 | orchestrator | Saturday 04 April 2026 00:54:20 +0000 (0:00:00.580) 0:08:46.619 ******** 2026-04-04 00:55:33.764260 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.764263 | orchestrator | 2026-04-04 00:55:33.764267 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-04 00:55:33.764271 | orchestrator | Saturday 04 April 2026 00:54:20 +0000 (0:00:00.526) 0:08:47.145 ******** 2026-04-04 00:55:33.764275 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.764279 | orchestrator | 2026-04-04 00:55:33.764282 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-04 00:55:33.764286 | orchestrator | Saturday 04 April 2026 00:54:21 +0000 (0:00:00.688) 0:08:47.833 ******** 2026-04-04 00:55:33.764290 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764294 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764298 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764301 | orchestrator | 2026-04-04 00:55:33.764305 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-04 00:55:33.764309 | orchestrator | Saturday 04 April 2026 00:54:23 +0000 (0:00:01.449) 0:08:49.283 ******** 2026-04-04 00:55:33.764313 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764316 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764320 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764324 | orchestrator | 2026-04-04 00:55:33.764328 | orchestrator | TASK 
[ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-04 00:55:33.764332 | orchestrator | Saturday 04 April 2026 00:54:24 +0000 (0:00:01.285) 0:08:50.568 ******** 2026-04-04 00:55:33.764336 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764339 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764343 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764347 | orchestrator | 2026-04-04 00:55:33.764350 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-04 00:55:33.764354 | orchestrator | Saturday 04 April 2026 00:54:26 +0000 (0:00:02.191) 0:08:52.760 ******** 2026-04-04 00:55:33.764358 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764362 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764365 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764369 | orchestrator | 2026-04-04 00:55:33.764373 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-04 00:55:33.764377 | orchestrator | Saturday 04 April 2026 00:54:28 +0000 (0:00:02.143) 0:08:54.903 ******** 2026-04-04 00:55:33.764381 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764385 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764388 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764392 | orchestrator | 2026-04-04 00:55:33.764398 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:55:33.764403 | orchestrator | Saturday 04 April 2026 00:54:29 +0000 (0:00:01.092) 0:08:55.995 ******** 2026-04-04 00:55:33.764406 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764413 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764417 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764420 | orchestrator | 2026-04-04 00:55:33.764424 | orchestrator | RUNNING HANDLER [ceph-handler : 
Mdss handler] ********************************** 2026-04-04 00:55:33.764428 | orchestrator | Saturday 04 April 2026 00:54:30 +0000 (0:00:00.956) 0:08:56.952 ******** 2026-04-04 00:55:33.764432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.764436 | orchestrator | 2026-04-04 00:55:33.764439 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-04 00:55:33.764443 | orchestrator | Saturday 04 April 2026 00:54:31 +0000 (0:00:00.515) 0:08:57.467 ******** 2026-04-04 00:55:33.764447 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764451 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764454 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764458 | orchestrator | 2026-04-04 00:55:33.764462 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-04 00:55:33.764466 | orchestrator | Saturday 04 April 2026 00:54:31 +0000 (0:00:00.280) 0:08:57.748 ******** 2026-04-04 00:55:33.764469 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.764473 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.764477 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.764481 | orchestrator | 2026-04-04 00:55:33.764484 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-04 00:55:33.764488 | orchestrator | Saturday 04 April 2026 00:54:32 +0000 (0:00:01.331) 0:08:59.080 ******** 2026-04-04 00:55:33.764492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:55:33.764496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:55:33.764499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:55:33.764503 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764507 | orchestrator | 
2026-04-04 00:55:33.764511 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-04 00:55:33.764514 | orchestrator | Saturday 04 April 2026 00:54:33 +0000 (0:00:00.625) 0:08:59.705 ******** 2026-04-04 00:55:33.764518 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764522 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764526 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764529 | orchestrator | 2026-04-04 00:55:33.764533 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-04 00:55:33.764537 | orchestrator | 2026-04-04 00:55:33.764541 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-04 00:55:33.764544 | orchestrator | Saturday 04 April 2026 00:54:34 +0000 (0:00:00.572) 0:09:00.277 ******** 2026-04-04 00:55:33.764548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.764552 | orchestrator | 2026-04-04 00:55:33.764558 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-04 00:55:33.764562 | orchestrator | Saturday 04 April 2026 00:54:34 +0000 (0:00:00.758) 0:09:01.036 ******** 2026-04-04 00:55:33.764565 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.764569 | orchestrator | 2026-04-04 00:55:33.764573 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-04 00:55:33.764577 | orchestrator | Saturday 04 April 2026 00:54:35 +0000 (0:00:00.530) 0:09:01.566 ******** 2026-04-04 00:55:33.764580 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764584 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764588 | orchestrator | skipping: 
[testbed-node-5] 2026-04-04 00:55:33.764592 | orchestrator | 2026-04-04 00:55:33.764596 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-04 00:55:33.764599 | orchestrator | Saturday 04 April 2026 00:54:35 +0000 (0:00:00.494) 0:09:02.061 ******** 2026-04-04 00:55:33.764606 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764610 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764614 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764618 | orchestrator | 2026-04-04 00:55:33.764621 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-04 00:55:33.764625 | orchestrator | Saturday 04 April 2026 00:54:36 +0000 (0:00:00.747) 0:09:02.808 ******** 2026-04-04 00:55:33.764629 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764633 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764637 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764640 | orchestrator | 2026-04-04 00:55:33.764644 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-04 00:55:33.764648 | orchestrator | Saturday 04 April 2026 00:54:37 +0000 (0:00:00.817) 0:09:03.626 ******** 2026-04-04 00:55:33.764652 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764657 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764661 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764665 | orchestrator | 2026-04-04 00:55:33.764669 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-04 00:55:33.764672 | orchestrator | Saturday 04 April 2026 00:54:38 +0000 (0:00:00.710) 0:09:04.336 ******** 2026-04-04 00:55:33.764676 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764680 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764684 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.764687 | 
orchestrator | 2026-04-04 00:55:33.764691 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-04 00:55:33.764695 | orchestrator | Saturday 04 April 2026 00:54:38 +0000 (0:00:00.515) 0:09:04.852 ******** 2026-04-04 00:55:33.764699 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764702 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764706 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.764710 | orchestrator | 2026-04-04 00:55:33.764714 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:55:33.764720 | orchestrator | Saturday 04 April 2026 00:54:38 +0000 (0:00:00.285) 0:09:05.137 ******** 2026-04-04 00:55:33.764724 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764728 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764731 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.764735 | orchestrator | 2026-04-04 00:55:33.764739 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:55:33.764743 | orchestrator | Saturday 04 April 2026 00:54:39 +0000 (0:00:00.295) 0:09:05.432 ******** 2026-04-04 00:55:33.764747 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764750 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764754 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764758 | orchestrator | 2026-04-04 00:55:33.764762 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:55:33.764765 | orchestrator | Saturday 04 April 2026 00:54:39 +0000 (0:00:00.754) 0:09:06.187 ******** 2026-04-04 00:55:33.764769 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764773 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764777 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764780 | orchestrator | 2026-04-04 
00:55:33.764784 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:55:33.764788 | orchestrator | Saturday 04 April 2026 00:54:41 +0000 (0:00:01.211) 0:09:07.399 ******** 2026-04-04 00:55:33.764792 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764795 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764799 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.764803 | orchestrator | 2026-04-04 00:55:33.764807 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:55:33.764810 | orchestrator | Saturday 04 April 2026 00:54:41 +0000 (0:00:00.299) 0:09:07.698 ******** 2026-04-04 00:55:33.764814 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764879 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764887 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.764891 | orchestrator | 2026-04-04 00:55:33.764895 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:55:33.764899 | orchestrator | Saturday 04 April 2026 00:54:41 +0000 (0:00:00.288) 0:09:07.987 ******** 2026-04-04 00:55:33.764902 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764906 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764910 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764913 | orchestrator | 2026-04-04 00:55:33.764917 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:55:33.764921 | orchestrator | Saturday 04 April 2026 00:54:42 +0000 (0:00:00.324) 0:09:08.311 ******** 2026-04-04 00:55:33.764925 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764928 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764932 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764936 | orchestrator | 2026-04-04 00:55:33.764940 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:55:33.764943 | orchestrator | Saturday 04 April 2026 00:54:42 +0000 (0:00:00.646) 0:09:08.958 ******** 2026-04-04 00:55:33.764947 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.764951 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.764955 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.764958 | orchestrator | 2026-04-04 00:55:33.764962 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:55:33.764969 | orchestrator | Saturday 04 April 2026 00:54:43 +0000 (0:00:00.389) 0:09:09.347 ******** 2026-04-04 00:55:33.764973 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764977 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.764980 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.764984 | orchestrator | 2026-04-04 00:55:33.764988 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:55:33.764991 | orchestrator | Saturday 04 April 2026 00:54:43 +0000 (0:00:00.322) 0:09:09.669 ******** 2026-04-04 00:55:33.764995 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.764999 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.765003 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.765006 | orchestrator | 2026-04-04 00:55:33.765010 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:55:33.765014 | orchestrator | Saturday 04 April 2026 00:54:43 +0000 (0:00:00.293) 0:09:09.963 ******** 2026-04-04 00:55:33.765018 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765021 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.765025 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.765029 | orchestrator | 2026-04-04 00:55:33.765033 | orchestrator | TASK [ceph-handler : 
Set_fact handler_crash_status] **************************** 2026-04-04 00:55:33.765037 | orchestrator | Saturday 04 April 2026 00:54:44 +0000 (0:00:00.423) 0:09:10.386 ******** 2026-04-04 00:55:33.765040 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.765044 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.765048 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.765051 | orchestrator | 2026-04-04 00:55:33.765055 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:55:33.765059 | orchestrator | Saturday 04 April 2026 00:54:44 +0000 (0:00:00.297) 0:09:10.684 ******** 2026-04-04 00:55:33.765063 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.765067 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.765070 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.765074 | orchestrator | 2026-04-04 00:55:33.765078 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-04 00:55:33.765082 | orchestrator | Saturday 04 April 2026 00:54:44 +0000 (0:00:00.437) 0:09:11.121 ******** 2026-04-04 00:55:33.765085 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.765089 | orchestrator | 2026-04-04 00:55:33.765093 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-04 00:55:33.765100 | orchestrator | Saturday 04 April 2026 00:54:45 +0000 (0:00:00.569) 0:09:11.690 ******** 2026-04-04 00:55:33.765104 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.765108 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:55:33.765111 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:55:33.765115 | orchestrator | 2026-04-04 00:55:33.765123 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if 
needed] *********************************** 2026-04-04 00:55:33.765127 | orchestrator | Saturday 04 April 2026 00:54:47 +0000 (0:00:01.597) 0:09:13.288 ******** 2026-04-04 00:55:33.765131 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 00:55:33.765134 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:55:33.765138 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.765142 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 00:55:33.765146 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-04 00:55:33.765149 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.765153 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 00:55:33.765157 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-04 00:55:33.765161 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.765164 | orchestrator | 2026-04-04 00:55:33.765168 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-04 00:55:33.765172 | orchestrator | Saturday 04 April 2026 00:54:48 +0000 (0:00:01.217) 0:09:14.505 ******** 2026-04-04 00:55:33.765175 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765179 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.765183 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.765186 | orchestrator | 2026-04-04 00:55:33.765190 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-04 00:55:33.765194 | orchestrator | Saturday 04 April 2026 00:54:48 +0000 (0:00:00.340) 0:09:14.846 ******** 2026-04-04 00:55:33.765198 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.765202 | orchestrator | 2026-04-04 00:55:33.765206 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 
2026-04-04 00:55:33.765209 | orchestrator | Saturday 04 April 2026 00:54:49 +0000 (0:00:00.598) 0:09:15.445 ******** 2026-04-04 00:55:33.765213 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.765217 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.765221 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.765225 | orchestrator | 2026-04-04 00:55:33.765229 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-04 00:55:33.765232 | orchestrator | Saturday 04 April 2026 00:54:49 +0000 (0:00:00.763) 0:09:16.208 ******** 2026-04-04 00:55:33.765236 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.765242 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-04 00:55:33.765246 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.765250 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-04 00:55:33.765253 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.765257 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-04 00:55:33.765264 | orchestrator | 2026-04-04 00:55:33.765268 | orchestrator | TASK [ceph-rgw : Get keys 
from monitors] *************************************** 2026-04-04 00:55:33.765272 | orchestrator | Saturday 04 April 2026 00:54:54 +0000 (0:00:04.464) 0:09:20.673 ******** 2026-04-04 00:55:33.765275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.765279 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:55:33.765283 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.765286 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:55:33.765290 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:33.765294 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:55:33.765298 | orchestrator | 2026-04-04 00:55:33.765301 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-04 00:55:33.765305 | orchestrator | Saturday 04 April 2026 00:54:56 +0000 (0:00:02.176) 0:09:22.849 ******** 2026-04-04 00:55:33.765309 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 00:55:33.765313 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.765316 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 00:55:33.765320 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.765324 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 00:55:33.765328 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.765331 | orchestrator | 2026-04-04 00:55:33.765335 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-04 00:55:33.765339 | orchestrator | Saturday 04 April 2026 00:54:57 +0000 (0:00:01.104) 0:09:23.953 ******** 2026-04-04 00:55:33.765343 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-04 
00:55:33.765346 | orchestrator | 2026-04-04 00:55:33.765350 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-04 00:55:33.765354 | orchestrator | Saturday 04 April 2026 00:54:58 +0000 (0:00:00.372) 0:09:24.326 ******** 2026-04-04 00:55:33.765360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765379 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765383 | orchestrator | 2026-04-04 00:55:33.765387 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-04 00:55:33.765390 | orchestrator | Saturday 04 April 2026 00:54:58 +0000 (0:00:00.601) 0:09:24.927 ******** 2026-04-04 00:55:33.765394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-04 00:55:33.765405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:55:33.765415 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765419 | orchestrator | 2026-04-04 00:55:33.765423 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-04 00:55:33.765427 | orchestrator | Saturday 04 April 2026 00:54:59 +0000 (0:00:00.775) 0:09:25.703 ******** 2026-04-04 00:55:33.765430 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:55:33.765434 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:55:33.765440 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:55:33.765444 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:55:33.765448 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:55:33.765452 | orchestrator | 2026-04-04 00:55:33.765455 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-04 00:55:33.765459 | orchestrator | Saturday 04 April 2026 00:55:19 +0000 (0:00:19.973) 0:09:45.677 
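The "Create rgw pools" task above expands each pool spec (`pg_num`, `size`, `type`) into Ceph commands on the delegate mon node. A minimal sketch of that mapping, assuming plain `ceph osd pool` CLI calls (the exact flags ceph-ansible issues may differ):

```python
# Sketch: expand rgw pool specs (as shown in the log items) into `ceph`
# CLI command strings. The command shapes are an assumption for
# illustration, not taken verbatim from this job.
POOLS = {
    "default.rgw.buckets.data": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.control": {"pg_num": 8, "size": 3, "type": "replicated"},
}

def pool_commands(pools):
    cmds = []
    for name, spec in sorted(pools.items()):
        # create the pool with its placement-group count and type ...
        cmds.append(f"ceph osd pool create {name} {spec['pg_num']} {spec['type']}")
        # ... then set the replica count
        cmds.append(f"ceph osd pool set {name} size {spec['size']}")
    return cmds
```

With five pools at ~4 s each, this accounts for the ~20 s this task shows in the timing recap.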
******** 2026-04-04 00:55:33.765463 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765467 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.765470 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.765474 | orchestrator | 2026-04-04 00:55:33.765478 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-04 00:55:33.765481 | orchestrator | Saturday 04 April 2026 00:55:19 +0000 (0:00:00.297) 0:09:45.974 ******** 2026-04-04 00:55:33.765485 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765489 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.765493 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.765496 | orchestrator | 2026-04-04 00:55:33.765500 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-04 00:55:33.765504 | orchestrator | Saturday 04 April 2026 00:55:20 +0000 (0:00:00.526) 0:09:46.500 ******** 2026-04-04 00:55:33.765507 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.765511 | orchestrator | 2026-04-04 00:55:33.765515 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-04 00:55:33.765518 | orchestrator | Saturday 04 April 2026 00:55:20 +0000 (0:00:00.517) 0:09:47.018 ******** 2026-04-04 00:55:33.765522 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.765526 | orchestrator | 2026-04-04 00:55:33.765530 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-04 00:55:33.765533 | orchestrator | Saturday 04 April 2026 00:55:21 +0000 (0:00:00.682) 0:09:47.701 ******** 2026-04-04 00:55:33.765537 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.765541 | orchestrator | 
changed: [testbed-node-4] 2026-04-04 00:55:33.765545 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.765548 | orchestrator | 2026-04-04 00:55:33.765552 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-04 00:55:33.765558 | orchestrator | Saturday 04 April 2026 00:55:22 +0000 (0:00:01.148) 0:09:48.849 ******** 2026-04-04 00:55:33.765562 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.765565 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.765569 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.765576 | orchestrator | 2026-04-04 00:55:33.765579 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-04 00:55:33.765583 | orchestrator | Saturday 04 April 2026 00:55:23 +0000 (0:00:01.041) 0:09:49.891 ******** 2026-04-04 00:55:33.765587 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:55:33.765591 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:55:33.765594 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:55:33.765598 | orchestrator | 2026-04-04 00:55:33.765602 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-04 00:55:33.765605 | orchestrator | Saturday 04 April 2026 00:55:25 +0000 (0:00:01.839) 0:09:51.730 ******** 2026-04-04 00:55:33.765609 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.765613 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.765617 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:55:33.765620 | orchestrator | 2026-04-04 00:55:33.765624 | orchestrator | RUNNING HANDLER 
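The "Systemd start rgw container" loop above starts one templated systemd unit per rgw instance item. A sketch of how the per-host unit names could be derived; the `ceph-radosgw@rgw.<host>.<instance>` id format is an assumption based on common ceph-ansible conventions:

```python
# Sketch: derive per-instance systemd unit names from the loop items
# shown in the log. The exact unit-id format is an assumption.
INSTANCES = {
    "testbed-node-3": {"instance_name": "rgw0", "radosgw_address": "192.168.16.13", "radosgw_frontend_port": 8081},
    "testbed-node-4": {"instance_name": "rgw0", "radosgw_address": "192.168.16.14", "radosgw_frontend_port": 8081},
}

def unit_name(hostname, instance):
    # systemd template unit: ceph-radosgw@<instance-id>.service
    return f"ceph-radosgw@rgw.{hostname}.{instance['instance_name']}.service"

units = [unit_name(h, i) for h, i in sorted(INSTANCES.items())]
```

Each node runs its own instance of the template unit, which is why the preceding tasks generate both the unit file and the `ceph-radosgw.target` that groups them.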
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:55:33.765628 | orchestrator | Saturday 04 April 2026 00:55:28 +0000 (0:00:02.683) 0:09:54.414 ******** 2026-04-04 00:55:33.765632 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765635 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.765639 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.765643 | orchestrator | 2026-04-04 00:55:33.765646 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-04 00:55:33.765650 | orchestrator | Saturday 04 April 2026 00:55:28 +0000 (0:00:00.317) 0:09:54.732 ******** 2026-04-04 00:55:33.765654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:55:33.765658 | orchestrator | 2026-04-04 00:55:33.765661 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-04 00:55:33.765665 | orchestrator | Saturday 04 April 2026 00:55:29 +0000 (0:00:00.745) 0:09:55.477 ******** 2026-04-04 00:55:33.765669 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:55:33.765673 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:55:33.765676 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:55:33.765680 | orchestrator | 2026-04-04 00:55:33.765684 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-04 00:55:33.765688 | orchestrator | Saturday 04 April 2026 00:55:29 +0000 (0:00:00.283) 0:09:55.761 ******** 2026-04-04 00:55:33.765691 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:33.765695 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:33.765699 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:33.765702 | orchestrator | 2026-04-04 00:55:33.765708 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-04 
00:55:33.765712 | orchestrator | Saturday 04 April 2026 00:55:29 +0000 (0:00:01.102) 0:09:56.072 ********
2026-04-04 00:55:33.765716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:33.765720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:33.765723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:33.765727 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:33.765731 | orchestrator |
2026-04-04 00:55:33.765735 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-04 00:55:33.765738 | orchestrator | Saturday 04 April 2026 00:55:30 +0000 (0:00:01.102) 0:09:57.175 ********
2026-04-04 00:55:33.765742 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:33.765746 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:33.765750 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:33.765753 | orchestrator |
2026-04-04 00:55:33.765757 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:55:33.765765 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-04 00:55:33.765769 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-04 00:55:33.765772 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-04 00:55:33.765776 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-04 00:55:33.765780 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-04 00:55:33.765784 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-04 00:55:33.765787 | orchestrator |
2026-04-04 00:55:33.765791 | orchestrator |
2026-04-04 00:55:33.765795 | orchestrator |
2026-04-04 00:55:33.765799 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:55:33.765803 | orchestrator | Saturday 04 April 2026 00:55:31 +0000 (0:00:00.227) 0:09:57.402 ********
2026-04-04 00:55:33.765806 | orchestrator | ===============================================================================
2026-04-04 00:55:33.765812 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 72.53s
2026-04-04 00:55:33.765816 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 36.56s
2026-04-04 00:55:33.765845 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.45s
2026-04-04 00:55:33.765849 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 19.97s
2026-04-04 00:55:33.765853 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.25s
2026-04-04 00:55:33.765857 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.73s
2026-04-04 00:55:33.765861 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.64s
2026-04-04 00:55:33.765865 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.73s
2026-04-04 00:55:33.765868 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.57s
2026-04-04 00:55:33.765872 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.17s
2026-04-04 00:55:33.765876 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 5.93s
2026-04-04 00:55:33.765880 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 5.87s
2026-04-04 00:55:33.765883 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.55s
2026-04-04 00:55:33.765887 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.46s
2026-04-04 00:55:33.765891 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.21s
2026-04-04 00:55:33.765894 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.01s
2026-04-04 00:55:33.765898 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.81s
2026-04-04 00:55:33.765902 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.68s
2026-04-04 00:55:33.765906 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.58s
2026-04-04 00:55:33.765910 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.13s
2026-04-04 00:55:33.765913 | orchestrator | 2026-04-04 00:55:33 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:36.786521 | orchestrator | 2026-04-04 00:55:36 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:36.788901 | orchestrator | 2026-04-04 00:55:36 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:36.790288 | orchestrator | 2026-04-04 00:55:36 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED
2026-04-04 00:55:36.790352 | orchestrator | 2026-04-04 00:55:36 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:39.825442 | orchestrator | 2026-04-04 00:55:39 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:55:39.826434 | orchestrator | 2026-04-04 00:55:39 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:55:39.828363 | orchestrator | 2026-04-04 00:55:39 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED
2026-04-04 00:55:39.828387 |
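The PLAY RECAP above is the usual place a wrapper script decides pass/fail. A minimal sketch, assuming only the line format shown here, of parsing one recap line into its counters:

```python
import re

# Sketch: parse a single Ansible PLAY RECAP line (format as printed in
# this log) into host + counter dict, e.g. to fail a wrapper when
# failed/unreachable are non-zero.
RECAP = ("testbed-node-3 : ok=193  changed=45  unreachable=0 "
         "failed=0 skipped=162  rescued=0 ignored=0")

def parse_recap(line):
    host, _, rest = line.partition(" : ")
    # every counter is "name=integer"
    counts = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counts

host, counts = parse_recap(RECAP)
```

For this run every host reports `failed=0 unreachable=0`, so the play counts as a success.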
orchestrator | 2026-04-04 00:55:39 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycles elided: the three tasks above remained in state STARTED, checked every ~3 s from 00:55:42 through 00:56:59 ...]
2026-04-04 00:57:02.169653 | orchestrator | 2026-04-04 00:57:02 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state STARTED
2026-04-04 00:57:02.171166 | orchestrator | 2026-04-04 00:57:02 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED
2026-04-04 00:57:02.173242 | orchestrator | 2026-04-04 00:57:02 |
INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED
2026-04-04 00:57:02.173311 | orchestrator | 2026-04-04 00:57:02 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:05.212484 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task e0df33f2-cb26-4707-b2a2-6d7c73fb839d is in state SUCCESS
2026-04-04 00:57:05.213347 | orchestrator |
2026-04-04 00:57:05.213422 | orchestrator |
2026-04-04 00:57:05.213442 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:57:05.213452 | orchestrator |
2026-04-04 00:57:05.213458 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:57:05.213465 | orchestrator | Saturday 04 April 2026 00:54:30 +0000 (0:00:00.326) 0:00:00.326 ********
2026-04-04 00:57:05.213472 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:05.213480 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:57:05.213486 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:57:05.213492 | orchestrator |
2026-04-04 00:57:05.213498 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:57:05.213504 | orchestrator | Saturday 04 April 2026 00:54:30 +0000 (0:00:00.313) 0:00:00.640 ********
2026-04-04 00:57:05.213511 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-04 00:57:05.213518 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-04 00:57:05.213525 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-04 00:57:05.213532 | orchestrator |
2026-04-04 00:57:05.213539 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-04 00:57:05.213545 | orchestrator |
2026-04-04 00:57:05.213552 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-04 00:57:05.213559 | orchestrator | Saturday 04 April 2026
00:54:30 +0000 (0:00:00.286) 0:00:00.927 ******** 2026-04-04 00:57:05.213566 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:05.213573 | orchestrator | 2026-04-04 00:57:05.213579 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-04 00:57:05.213586 | orchestrator | Saturday 04 April 2026 00:54:31 +0000 (0:00:00.560) 0:00:01.487 ******** 2026-04-04 00:57:05.213593 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-04 00:57:05.213599 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-04 00:57:05.213604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-04 00:57:05.213611 | orchestrator | 2026-04-04 00:57:05.213618 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-04 00:57:05.213624 | orchestrator | Saturday 04 April 2026 00:54:33 +0000 (0:00:01.875) 0:00:03.363 ******** 2026-04-04 00:57:05.213634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.213643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.213781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.213794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.213801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.213811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.213823 | orchestrator | 2026-04-04 00:57:05.213830 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-04 00:57:05.213844 | orchestrator | Saturday 04 April 2026 00:54:34 +0000 (0:00:01.432) 0:00:04.795 ******** 2026-04-04 00:57:05.213856 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:05.213862 | orchestrator | 2026-04-04 
00:57:05.213868 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-04 00:57:05.213878 | orchestrator | Saturday 04 April 2026 00:54:35 +0000 (0:00:00.474) 0:00:05.270 ******** 2026-04-04 00:57:05.213885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.213891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.213898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.213912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.213926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.213934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.214200 | orchestrator | 2026-04-04 00:57:05.214213 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-04 00:57:05.214219 | orchestrator | Saturday 04 April 2026 00:54:38 +0000 (0:00:03.028) 0:00:08.298 ******** 2026-04-04 00:57:05.214224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214253 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:05.214259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 
00:57:05.214266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214284 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:05.214292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214316 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:05.214323 | orchestrator | 2026-04-04 00:57:05.214331 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-04 00:57:05.214337 | orchestrator | Saturday 04 April 2026 00:54:38 +0000 (0:00:00.821) 0:00:09.120 ******** 2026-04-04 00:57:05.214344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214370 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:05.214386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214401 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:05.214407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': 
['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214419 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:05.214425 | orchestrator | 2026-04-04 00:57:05.214432 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-04 00:57:05.214439 | orchestrator | Saturday 04 April 2026 00:54:39 +0000 (0:00:01.060) 0:00:10.181 ******** 2026-04-04 00:57:05.214449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.214461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.214465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.214470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.214484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.214492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.214496 | orchestrator | 2026-04-04 00:57:05.214500 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-04 00:57:05.214504 | orchestrator | Saturday 04 April 2026 00:54:42 +0000 (0:00:02.847) 0:00:13.028 ******** 2026-04-04 00:57:05.214508 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:05.214512 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:05.214515 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:05.214519 | orchestrator | 2026-04-04 00:57:05.214523 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-04 00:57:05.214527 | orchestrator | Saturday 04 April 2026 00:54:45 +0000 (0:00:02.480) 0:00:15.508 ******** 2026-04-04 00:57:05.214531 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:05.214534 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:05.214542 | 
orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:05.214546 | orchestrator | 2026-04-04 00:57:05.214550 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-04 00:57:05.214554 | orchestrator | Saturday 04 April 2026 00:54:46 +0000 (0:00:01.459) 0:00:16.968 ******** 2026-04-04 00:57:05.214557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.214562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.214568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 00:57:05.214576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.214583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.214588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-04 00:57:05.214593 | orchestrator | 2026-04-04 00:57:05.214597 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-04 00:57:05.214601 | orchestrator | Saturday 04 April 2026 00:54:49 +0000 (0:00:02.391) 0:00:19.360 ******** 2026-04-04 00:57:05.214605 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 00:57:05.214609 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:57:05.214613 | orchestrator | } 2026-04-04 00:57:05.214617 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 00:57:05.214621 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:57:05.214625 | orchestrator | } 2026-04-04 00:57:05.214629 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 00:57:05.214632 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:57:05.214636 | orchestrator | } 2026-04-04 00:57:05.214640 | orchestrator | 2026-04-04 00:57:05.214644 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 00:57:05.214650 | orchestrator | Saturday 04 April 2026 00:54:49 +0000 (0:00:00.359) 0:00:19.719 ******** 2026-04-04 00:57:05.214654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 
00:57:05.214666 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:05.214781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214809 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:05.214813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 00:57:05.214822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-04 00:57:05.214826 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:05.214830 | orchestrator | 2026-04-04 00:57:05.214834 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-04 00:57:05.214839 | orchestrator | Saturday 04 April 2026 00:54:50 +0000 (0:00:00.822) 0:00:20.542 ******** 2026-04-04 00:57:05.214842 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:05.214846 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:05.214850 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:05.214854 | orchestrator | 2026-04-04 00:57:05.214858 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-04 00:57:05.214862 | orchestrator | Saturday 04 April 2026 00:54:50 +0000 (0:00:00.300) 0:00:20.843 ******** 2026-04-04 00:57:05.214866 | orchestrator | 2026-04-04 00:57:05.214869 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-04 00:57:05.214873 | orchestrator | Saturday 04 April 2026 00:54:50 +0000 (0:00:00.057) 0:00:20.900 ******** 2026-04-04 00:57:05.214877 | orchestrator | 2026-04-04 00:57:05.214881 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-04 00:57:05.214885 | orchestrator | Saturday 04 April 2026 00:54:50 +0000 (0:00:00.057) 0:00:20.957 ******** 2026-04-04 00:57:05.214888 | orchestrator | 2026-04-04 00:57:05.214892 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-04 00:57:05.214896 | 
orchestrator | Saturday 04 April 2026 00:54:50 +0000 (0:00:00.157) 0:00:21.115 ******** 2026-04-04 00:57:05.214900 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:05.214903 | orchestrator | 2026-04-04 00:57:05.214915 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-04 00:57:05.214919 | orchestrator | Saturday 04 April 2026 00:54:51 +0000 (0:00:00.168) 0:00:21.283 ******** 2026-04-04 00:57:05.214923 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:05.214927 | orchestrator | 2026-04-04 00:57:05.214935 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-04 00:57:05.214953 | orchestrator | Saturday 04 April 2026 00:54:51 +0000 (0:00:00.142) 0:00:21.425 ******** 2026-04-04 00:57:05.214957 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:05.214965 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:05.214971 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:05.214978 | orchestrator | 2026-04-04 00:57:05.214984 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-04 00:57:05.214990 | orchestrator | Saturday 04 April 2026 00:55:45 +0000 (0:00:54.457) 0:01:15.883 ******** 2026-04-04 00:57:05.214996 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:05.215002 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:05.215008 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:05.215013 | orchestrator | 2026-04-04 00:57:05.215020 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-04 00:57:05.215025 | orchestrator | Saturday 04 April 2026 00:56:48 +0000 (0:01:02.490) 0:02:18.373 ******** 2026-04-04 00:57:05.215035 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:05.215041 | orchestrator | 
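The restarts above are followed by a readiness check: the role repeatedly probes the service's REST endpoint until it answers, the same retry-until-healthy pattern the `healthcheck_curl` probes in the service dicts use. A minimal sketch of that loop, assuming a generic `check()` callable in place of the role's actual HTTP task (the function name and wiring are illustrative, not kolla-ansible code):

```python
import time

def wait_until_ready(check, retries=30, interval=2.0, sleep=time.sleep):
    """Poll check() until it reports healthy, as the
    'Wait for OpenSearch to become ready' task does. check() would
    wrap an HTTP GET against e.g. http://192.168.16.10:9200
    (endpoint taken from the log; the wrapping is hypothetical)."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt  # number of polls it took
        if attempt < retries:
            sleep(interval)
    raise TimeoutError("service did not become ready within the retry budget")
```

Injecting `sleep` keeps the loop testable without real delays; in production the defaults simply poll every couple of seconds.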
2026-04-04 00:57:05.215047 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-04 00:57:05.215053 | orchestrator | Saturday 04 April 2026 00:56:48 +0000 (0:00:00.615) 0:02:18.989 ********
2026-04-04 00:57:05.215060 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:05.215066 | orchestrator |
2026-04-04 00:57:05.215072 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-04-04 00:57:05.215079 | orchestrator | Saturday 04 April 2026 00:56:51 +0000 (0:00:02.858) 0:02:21.848 ********
2026-04-04 00:57:05.215083 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:05.215087 | orchestrator |
2026-04-04 00:57:05.215091 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-04-04 00:57:05.215094 | orchestrator | Saturday 04 April 2026 00:56:54 +0000 (0:00:02.573) 0:02:24.421 ********
2026-04-04 00:57:05.215098 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:05.215102 | orchestrator |
2026-04-04 00:57:05.215106 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-04 00:57:05.215110 | orchestrator | Saturday 04 April 2026 00:56:56 +0000 (0:00:03.158) 0:02:27.195 ********
2026-04-04 00:57:05.215113 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:57:05.215117 | orchestrator |
2026-04-04 00:57:05.215121 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-04 00:57:05.215125 | orchestrator | Saturday 04 April 2026 00:57:00 +0000 (0:00:03.031) 0:02:30.353 ********
2026-04-04 00:57:05.215129 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:57:05.215133 | orchestrator |
2026-04-04 00:57:05.215137 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:57:05.215142 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 00:57:05.215147 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-04 00:57:05.215151 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-04 00:57:05.215155 | orchestrator |
2026-04-04 00:57:05.215159 | orchestrator |
2026-04-04 00:57:05.215162 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:57:05.215166 | orchestrator | Saturday 04 April 2026 00:57:03 +0000 (0:00:03.031) 0:02:33.385 ********
2026-04-04 00:57:05.215170 | orchestrator | ===============================================================================
2026-04-04 00:57:05.215174 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 62.49s
2026-04-04 00:57:05.215178 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.46s
2026-04-04 00:57:05.215181 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.16s
2026-04-04 00:57:05.215185 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.03s
2026-04-04 00:57:05.215189 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.03s
2026-04-04 00:57:05.215199 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.86s
2026-04-04 00:57:05.215203 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.85s
2026-04-04 00:57:05.215207 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.77s
2026-04-04 00:57:05.215211 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.57s
2026-04-04 00:57:05.215214 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.48s
2026-04-04 00:57:05.215218 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.39s
2026-04-04 00:57:05.215222 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.88s
2026-04-04 00:57:05.215226 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.46s
2026-04-04 00:57:05.215230 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.43s
2026-04-04 00:57:05.215234 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s
2026-04-04 00:57:05.215237 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.82s
2026-04-04 00:57:05.215249 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.82s
2026-04-04 00:57:05.215253 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s
2026-04-04 00:57:05.215262 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s
2026-04-04 00:57:05.215270 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s
2026-04-04 00:57:05.215274 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:05.217723 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:05.217789 | orchestrator | 2026-04-04 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:08.271364 | orchestrator | 2026-04-04 00:57:08 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:08.273166 | orchestrator | 2026-04-04 00:57:08 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:08.273240 | orchestrator | 2026-04-04 00:57:08 | INFO  | Wait 1 second(s)
until the next check 2026-04-04 00:57:11.316269 | orchestrator | 2026-04-04 00:57:11 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:11.319137 | orchestrator | 2026-04-04 00:57:11 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:11.319205 | orchestrator | 2026-04-04 00:57:11 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:14.361427 | orchestrator | 2026-04-04 00:57:14 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:14.363008 | orchestrator | 2026-04-04 00:57:14 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:14.363042 | orchestrator | 2026-04-04 00:57:14 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:17.402894 | orchestrator | 2026-04-04 00:57:17 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:17.404252 | orchestrator | 2026-04-04 00:57:17 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:17.404299 | orchestrator | 2026-04-04 00:57:17 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:20.444597 | orchestrator | 2026-04-04 00:57:20 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:20.447679 | orchestrator | 2026-04-04 00:57:20 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:20.447776 | orchestrator | 2026-04-04 00:57:20 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:23.496886 | orchestrator | 2026-04-04 00:57:23 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:23.499402 | orchestrator | 2026-04-04 00:57:23 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:23.499793 | orchestrator | 2026-04-04 00:57:23 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:26.540628 | orchestrator | 2026-04-04 
00:57:26 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:26.541592 | orchestrator | 2026-04-04 00:57:26 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:26.541630 | orchestrator | 2026-04-04 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:29.586372 | orchestrator | 2026-04-04 00:57:29 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:29.588551 | orchestrator | 2026-04-04 00:57:29 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state STARTED 2026-04-04 00:57:29.588614 | orchestrator | 2026-04-04 00:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:32.637770 | orchestrator | 2026-04-04 00:57:32 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:32.641559 | orchestrator | 2026-04-04 00:57:32 | INFO  | Task d6e51d5e-985b-4621-bfe5-7737a62bc605 is in state SUCCESS 2026-04-04 00:57:32.643507 | orchestrator | 2026-04-04 00:57:32.643572 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-04 00:57:32.643582 | orchestrator | 2.16.14 2026-04-04 00:57:32.643590 | orchestrator | 2026-04-04 00:57:32.643596 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-04 00:57:32.643604 | orchestrator | 2026-04-04 00:57:32.643610 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-04 00:57:32.643616 | orchestrator | Saturday 04 April 2026 00:55:35 +0000 (0:00:00.437) 0:00:00.437 ******** 2026-04-04 00:57:32.643682 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:57:32.643692 | orchestrator | 2026-04-04 00:57:32.643699 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-04 00:57:32.643705 | 
orchestrator | Saturday 04 April 2026 00:55:36 +0000 (0:00:00.418) 0:00:00.856 ********
2026-04-04 00:57:32.644031 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644042 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644048 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644055 | orchestrator |
2026-04-04 00:57:32.644061 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-04 00:57:32.644083 | orchestrator | Saturday 04 April 2026 00:55:37 +0000 (0:00:00.981) 0:00:01.838 ********
2026-04-04 00:57:32.644090 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644096 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644102 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644108 | orchestrator |
2026-04-04 00:57:32.644114 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-04 00:57:32.644120 | orchestrator | Saturday 04 April 2026 00:55:37 +0000 (0:00:00.230) 0:00:02.068 ********
2026-04-04 00:57:32.644127 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644133 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644139 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644144 | orchestrator |
2026-04-04 00:57:32.644151 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-04 00:57:32.644158 | orchestrator | Saturday 04 April 2026 00:55:38 +0000 (0:00:00.696) 0:00:02.765 ********
2026-04-04 00:57:32.644165 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644171 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644198 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644204 | orchestrator |
2026-04-04 00:57:32.644210 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-04 00:57:32.644217 | orchestrator | Saturday 04 April 2026 00:55:38 +0000 (0:00:00.262) 0:00:03.027 ********
2026-04-04 00:57:32.644224 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644230 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644236 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644241 | orchestrator |
2026-04-04 00:57:32.644247 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-04 00:57:32.644253 | orchestrator | Saturday 04 April 2026 00:55:38 +0000 (0:00:00.246) 0:00:03.274 ********
2026-04-04 00:57:32.644259 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644265 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644270 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644276 | orchestrator |
2026-04-04 00:57:32.644282 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-04 00:57:32.644288 | orchestrator | Saturday 04 April 2026 00:55:38 +0000 (0:00:00.261) 0:00:03.535 ********
2026-04-04 00:57:32.644294 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.644301 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.644308 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.644313 | orchestrator |
2026-04-04 00:57:32.644320 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-04 00:57:32.644326 | orchestrator | Saturday 04 April 2026 00:55:39 +0000 (0:00:00.374) 0:00:03.909 ********
2026-04-04 00:57:32.644332 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644339 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644345 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644351 | orchestrator |
2026-04-04 00:57:32.644359 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-04 00:57:32.644366 | orchestrator | Saturday 04 April 2026 00:55:39 +0000 (0:00:00.263) 0:00:04.172 ********
2026-04-04 00:57:32.644373 |
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:57:32.644379 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:57:32.644387 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:57:32.644394 | orchestrator | 2026-04-04 00:57:32.644401 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-04 00:57:32.644408 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:00.547) 0:00:04.719 ******** 2026-04-04 00:57:32.644414 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:57:32.644423 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:57:32.644614 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:57:32.644642 | orchestrator | 2026-04-04 00:57:32.644650 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-04 00:57:32.644656 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:00.359) 0:00:05.079 ******** 2026-04-04 00:57:32.644663 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:57:32.644670 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:57:32.644676 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:57:32.644682 | orchestrator | 2026-04-04 00:57:32.644689 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-04 00:57:32.644695 | orchestrator | Saturday 04 April 2026 00:55:43 +0000 (0:00:02.948) 0:00:08.028 ******** 2026-04-04 00:57:32.644702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-04 00:57:32.644708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-04 00:57:32.644715 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-04 00:57:32.644720 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.644738 | orchestrator | 2026-04-04 00:57:32.644778 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-04 00:57:32.644785 | orchestrator | Saturday 04 April 2026 00:55:43 +0000 (0:00:00.370) 0:00:08.398 ******** 2026-04-04 00:57:32.644794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.644803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.644818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.644825 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.644831 | orchestrator | 2026-04-04 00:57:32.644837 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-04 00:57:32.644844 | orchestrator | Saturday 04 April 2026 00:55:44 +0000 (0:00:00.660) 0:00:09.058 ******** 2026-04-04 00:57:32.644853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.644863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.644869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.644876 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.644882 | orchestrator | 2026-04-04 00:57:32.644889 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-04 00:57:32.644896 | orchestrator | Saturday 04 April 2026 00:55:44 +0000 (0:00:00.142) 0:00:09.200 ******** 2026-04-04 00:57:32.644904 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '52febfea2aff', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-04 00:55:41.273345', 'end': '2026-04-04 00:55:41.302713', 'delta': '0:00:00.029368', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['52febfea2aff'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-04 00:57:32.644913 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a42e5c440872', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-04 00:55:42.247658', 'end': '2026-04-04 00:55:42.286699', 'delta': '0:00:00.039041', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a42e5c440872'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-04 00:57:32.644946 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'eaab33536c26', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-04 00:55:43.164064', 'end': '2026-04-04 00:55:43.207057', 'delta': '0:00:00.042993', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['eaab33536c26'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-04 00:57:32.644953 | orchestrator | 2026-04-04 00:57:32.644959 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-04 00:57:32.644969 | orchestrator | Saturday 04 April 2026 00:55:44 +0000 (0:00:00.276) 0:00:09.476 ******** 2026-04-04 
00:57:32.644975 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.644981 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:57:32.644987 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:57:32.644992 | orchestrator |
2026-04-04 00:57:32.644998 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-04 00:57:32.645006 | orchestrator | Saturday 04 April 2026 00:55:45 +0000 (0:00:00.389) 0:00:09.866 ********
2026-04-04 00:57:32.645013 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-04 00:57:32.645020 | orchestrator |
2026-04-04 00:57:32.645027 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-04 00:57:32.645034 | orchestrator | Saturday 04 April 2026 00:55:46 +0000 (0:00:01.368) 0:00:11.234 ********
2026-04-04 00:57:32.645040 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645046 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645052 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645057 | orchestrator |
2026-04-04 00:57:32.645063 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-04 00:57:32.645070 | orchestrator | Saturday 04 April 2026 00:55:46 +0000 (0:00:00.340) 0:00:11.575 ********
2026-04-04 00:57:32.645090 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645096 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645108 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645114 | orchestrator |
2026-04-04 00:57:32.645120 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-04 00:57:32.645126 | orchestrator | Saturday 04 April 2026 00:55:47 +0000 (0:00:00.516) 0:00:12.092 ********
2026-04-04 00:57:32.645131 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645137 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645143 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645149 | orchestrator |
2026-04-04 00:57:32.645156 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-04 00:57:32.645162 | orchestrator | Saturday 04 April 2026 00:55:47 +0000 (0:00:00.457) 0:00:12.550 ********
2026-04-04 00:57:32.645168 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:57:32.645174 | orchestrator |
2026-04-04 00:57:32.645181 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-04 00:57:32.645187 | orchestrator | Saturday 04 April 2026 00:55:48 +0000 (0:00:00.134) 0:00:12.684 ********
2026-04-04 00:57:32.645193 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645209 | orchestrator |
2026-04-04 00:57:32.645215 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-04 00:57:32.645222 | orchestrator | Saturday 04 April 2026 00:55:48 +0000 (0:00:00.212) 0:00:12.896 ********
2026-04-04 00:57:32.645228 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645235 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645241 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645246 | orchestrator |
2026-04-04 00:57:32.645252 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-04 00:57:32.645258 | orchestrator | Saturday 04 April 2026 00:55:48 +0000 (0:00:00.270) 0:00:13.167 ********
2026-04-04 00:57:32.645264 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645270 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645276 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645282 | orchestrator |
2026-04-04 00:57:32.645289 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-04 00:57:32.645296 | orchestrator | Saturday 04 April 2026 00:55:48 +0000 (0:00:00.305) 0:00:13.472 ********
2026-04-04 00:57:32.645303 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645309 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645315 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645321 | orchestrator |
2026-04-04 00:57:32.645328 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-04 00:57:32.645334 | orchestrator | Saturday 04 April 2026 00:55:49 +0000 (0:00:00.533) 0:00:14.005 ********
2026-04-04 00:57:32.645341 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645347 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645354 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645363 | orchestrator |
2026-04-04 00:57:32.645371 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-04 00:57:32.645378 | orchestrator | Saturday 04 April 2026 00:55:49 +0000 (0:00:00.303) 0:00:14.309 ********
2026-04-04 00:57:32.645384 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645391 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645398 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645405 | orchestrator |
2026-04-04 00:57:32.645411 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-04 00:57:32.645417 | orchestrator | Saturday 04 April 2026 00:55:49 +0000 (0:00:00.304) 0:00:14.613 ********
2026-04-04 00:57:32.645424 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:57:32.645430 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.645437 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:57:32.645473 | orchestrator |
2026-04-04 00:57:32.645480 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-04 00:57:32.645488 | orchestrator | Saturday
04 April 2026 00:55:50 +0000 (0:00:00.325) 0:00:14.939 ******** 2026-04-04 00:57:32.645494 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.645500 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.645507 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.645513 | orchestrator | 2026-04-04 00:57:32.645521 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-04 00:57:32.645528 | orchestrator | Saturday 04 April 2026 00:55:50 +0000 (0:00:00.538) 0:00:15.477 ******** 2026-04-04 00:57:32.645545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3', 'dm-uuid-LVM-wozvLOh456sUfn9PqWV2oYBmxucNglfIsRj4iQcmeGu13Yo668Xa1ie8B5Vp2zNd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf', 'dm-uuid-LVM-3GO6ulA2UCr79XQtMUmeGCQVwsfTCN3Q1E6l2EmACUpV8mUHmqxWcqJe2RaqaMTV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a', 'dm-uuid-LVM-pgeNJmKNp28pjV3fx86BCWc8wX4QALTFGsYLqbIr0gemBAC5etWKyA4QhGr3xbbZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b', 'dm-uuid-LVM-Zgd0Gt58TKykaDOn90TkpYikcAaeJTdGNTvvZQdWx20IpbH2fKcdqHJSe79cISTu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae', 'dm-uuid-LVM-I9mvQrhzD9WRmt2aKBMUg5i54orKM11aDq10QeDsfxP8JRu4O5JDaP1Hg8Rxd7hg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6', 'dm-uuid-LVM-e5jS3yC23cZhqTNE2Gedcepj8x5rLXlu5xWcQfH2U9iwJYpApQDbI8mCzpWfQznY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645763 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645858 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LPDpNU-e6eu-lfRM-x6KR-689B-8pfF-RCrCE6', 'scsi-0QEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c', 'scsi-SQEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645919 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xv41ba-sM5C-aoVy-fzVJ-f2Kt-Dddx-6eEUlG', 'scsi-0QEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10', 'scsi-SQEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:57:32.645942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3342fG-blzy-o6fy-UO4K-31rX-ThXL-EiYsBj', 'scsi-0QEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434', 'scsi-SQEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903', 'scsi-SQEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tb2xf4-QKmJ-XvbT-1Uvb-cQ8T-LMwd-9FcBoK', 'scsi-0QEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364', 'scsi-SQEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.645987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0', 'scsi-SQEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.646001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q6wpoU-SZHW-edcv-Crdi-vP9G-hz0J-rB1IPk', 'scsi-0QEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a', 'scsi-SQEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.646009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.646067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.646076 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.646085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3BfRvu-NPKS-4GHk-tgZa-LaI8-IdqC-seyFLh', 'scsi-0QEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca', 'scsi-SQEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.646092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338', 'scsi-SQEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:57:32.646099 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.646112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 
'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:57:32.646124 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:57:32.646130 | orchestrator |
2026-04-04 00:57:32.646136 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-04 00:57:32.646142 | orchestrator | Saturday 04 April 2026 00:55:51 +0000 (0:00:00.498) 0:00:15.975 ********
2026-04-04 00:57:32.646153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3', 'dm-uuid-LVM-wozvLOh456sUfn9PqWV2oYBmxucNglfIsRj4iQcmeGu13Yo668Xa1ie8B5Vp2zNd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:57:32.646160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf', 'dm-uuid-LVM-3GO6ulA2UCr79XQtMUmeGCQVwsfTCN3Q1E6l2EmACUpV8mUHmqxWcqJe2RaqaMTV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard':
'4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-04 00:57:32.646210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646221 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646234 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a', 'dm-uuid-LVM-pgeNJmKNp28pjV3fx86BCWc8wX4QALTFGsYLqbIr0gemBAC5etWKyA4QhGr3xbbZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc287254-001b-4450-afd2-9bec2027ae79-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 
'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646279 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3-osd--block--7fdc24e9--a76c--5276--a9f5--2fea7f78f0c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LPDpNU-e6eu-lfRM-x6KR-689B-8pfF-RCrCE6', 'scsi-0QEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c', 'scsi-SQEMU_QEMU_HARDDISK_c11eb6c9-bfbf-4293-bc40-9ec52317ad2c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b', 'dm-uuid-LVM-Zgd0Gt58TKykaDOn90TkpYikcAaeJTdGNTvvZQdWx20IpbH2fKcdqHJSe79cISTu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf-osd--block--ecc56a61--ea8b--515f--be54--1cf9bb6e81cf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xv41ba-sM5C-aoVy-fzVJ-f2Kt-Dddx-6eEUlG', 'scsi-0QEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10', 'scsi-SQEMU_QEMU_HARDDISK_3b29289e-9d48-43bf-9ccb-2d527cba3b10'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646311 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903', 'scsi-SQEMU_QEMU_HARDDISK_ab9c2046-b8c0-414f-97e1-5f0c3376e903'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646331 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646346 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.646354 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646365 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646382 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae', 'dm-uuid-LVM-I9mvQrhzD9WRmt2aKBMUg5i54orKM11aDq10QeDsfxP8JRu4O5JDaP1Hg8Rxd7hg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646390 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6', 'dm-uuid-LVM-e5jS3yC23cZhqTNE2Gedcepj8x5rLXlu5xWcQfH2U9iwJYpApQDbI8mCzpWfQznY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646421 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646431 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646453 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646459 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c9340f8-6bc1-41cf-8ec5-49feac56714d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-04 00:57:32.646489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646495 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b1fc2ad7--1445--5918--af09--c59800dad69a-osd--block--b1fc2ad7--1445--5918--af09--c59800dad69a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Q6wpoU-SZHW-edcv-Crdi-vP9G-hz0J-rB1IPk', 'scsi-0QEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a', 'scsi-SQEMU_QEMU_HARDDISK_3b28ae8d-20ef-4453-9e76-4b2c7e5aca9a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646527 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16', 'scsi-SQEMU_QEMU_HARDDISK_2edc74eb-d496-4371-809c-e00c1f1a3999-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-04 00:57:32.646535 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f8b2f720--8689--5378--93a8--1716210ee10b-osd--block--f8b2f720--8689--5378--93a8--1716210ee10b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3BfRvu-NPKS-4GHk-tgZa-LaI8-IdqC-seyFLh', 'scsi-0QEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca', 'scsi-SQEMU_QEMU_HARDDISK_0bfc49b0-6c75-49d4-a01c-0507cea22dca'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646550 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a8cb98ca--1bad--517a--917a--7c952ebb91ae-osd--block--a8cb98ca--1bad--517a--917a--7c952ebb91ae'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3342fG-blzy-o6fy-UO4K-31rX-ThXL-EiYsBj', 'scsi-0QEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434', 'scsi-SQEMU_QEMU_HARDDISK_fbd8dc74-d964-4e06-8b01-1da5dc54c434'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6-osd--block--0b8e88b0--25e2--5e5e--a9b3--eb58a1775db6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tb2xf4-QKmJ-XvbT-1Uvb-cQ8T-LMwd-9FcBoK', 'scsi-0QEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364', 'scsi-SQEMU_QEMU_HARDDISK_3688be93-9535-40e0-bcab-38dca1989364'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338', 'scsi-SQEMU_QEMU_HARDDISK_fd41852f-1b07-4466-8009-0d8f18f39338'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0', 'scsi-SQEMU_QEMU_HARDDISK_1f1f6a26-dade-427f-8374-af0cc4364dc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 
'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:57:32.646601 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.646608 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.646613 | orchestrator | 2026-04-04 00:57:32.646618 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-04 00:57:32.646665 | orchestrator | Saturday 04 April 2026 00:55:51 +0000 (0:00:00.560) 0:00:16.536 ******** 2026-04-04 00:57:32.646674 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:57:32.646680 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:57:32.646686 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:57:32.646693 | orchestrator | 2026-04-04 00:57:32.646700 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-04 00:57:32.646711 | orchestrator | Saturday 04 April 2026 00:55:52 +0000 (0:00:00.628) 0:00:17.165 ******** 2026-04-04 00:57:32.646717 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:57:32.646723 | orchestrator | ok: [testbed-node-4] 2026-04-04 
00:57:32.646729 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:57:32.646736 | orchestrator | 2026-04-04 00:57:32.646742 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-04 00:57:32.646748 | orchestrator | Saturday 04 April 2026 00:55:52 +0000 (0:00:00.481) 0:00:17.647 ******** 2026-04-04 00:57:32.646753 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:57:32.646760 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:57:32.646766 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:57:32.646772 | orchestrator | 2026-04-04 00:57:32.646778 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-04 00:57:32.646784 | orchestrator | Saturday 04 April 2026 00:55:53 +0000 (0:00:00.666) 0:00:18.314 ******** 2026-04-04 00:57:32.646790 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.646796 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.646802 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.646809 | orchestrator | 2026-04-04 00:57:32.646815 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-04 00:57:32.646822 | orchestrator | Saturday 04 April 2026 00:55:54 +0000 (0:00:00.402) 0:00:18.716 ******** 2026-04-04 00:57:32.646835 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.646841 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.646847 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.646853 | orchestrator | 2026-04-04 00:57:32.646859 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-04 00:57:32.646865 | orchestrator | Saturday 04 April 2026 00:55:54 +0000 (0:00:00.478) 0:00:19.195 ******** 2026-04-04 00:57:32.646873 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.646880 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.646886 | 
orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.646892 | orchestrator | 2026-04-04 00:57:32.646899 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-04 00:57:32.646904 | orchestrator | Saturday 04 April 2026 00:55:54 +0000 (0:00:00.471) 0:00:19.666 ******** 2026-04-04 00:57:32.646911 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-04 00:57:32.646918 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-04 00:57:32.646924 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-04 00:57:32.646931 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-04 00:57:32.646937 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-04 00:57:32.646944 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-04 00:57:32.646951 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-04 00:57:32.646957 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-04 00:57:32.646964 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-04 00:57:32.646972 | orchestrator | 2026-04-04 00:57:32.646978 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-04 00:57:32.646985 | orchestrator | Saturday 04 April 2026 00:55:55 +0000 (0:00:00.824) 0:00:20.490 ******** 2026-04-04 00:57:32.646992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-04 00:57:32.646999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-04 00:57:32.647005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-04 00:57:32.647012 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647018 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-04 00:57:32.647024 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-04 
00:57:32.647030 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-04 00:57:32.647036 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.647042 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-04 00:57:32.647047 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-04 00:57:32.647053 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-04 00:57:32.647059 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.647065 | orchestrator | 2026-04-04 00:57:32.647071 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-04 00:57:32.647076 | orchestrator | Saturday 04 April 2026 00:55:56 +0000 (0:00:00.328) 0:00:20.819 ******** 2026-04-04 00:57:32.647083 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:57:32.647089 | orchestrator | 2026-04-04 00:57:32.647095 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-04 00:57:32.647102 | orchestrator | Saturday 04 April 2026 00:55:56 +0000 (0:00:00.649) 0:00:21.468 ******** 2026-04-04 00:57:32.647117 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647123 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.647129 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.647134 | orchestrator | 2026-04-04 00:57:32.647140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-04 00:57:32.647146 | orchestrator | Saturday 04 April 2026 00:55:57 +0000 (0:00:00.296) 0:00:21.765 ******** 2026-04-04 00:57:32.647158 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647163 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.647166 | orchestrator | 
skipping: [testbed-node-5] 2026-04-04 00:57:32.647170 | orchestrator | 2026-04-04 00:57:32.647174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-04 00:57:32.647178 | orchestrator | Saturday 04 April 2026 00:55:57 +0000 (0:00:00.280) 0:00:22.046 ******** 2026-04-04 00:57:32.647182 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647186 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.647190 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:57:32.647193 | orchestrator | 2026-04-04 00:57:32.647198 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-04 00:57:32.647206 | orchestrator | Saturday 04 April 2026 00:55:57 +0000 (0:00:00.286) 0:00:22.332 ******** 2026-04-04 00:57:32.647210 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:57:32.647214 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:57:32.647218 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:57:32.647221 | orchestrator | 2026-04-04 00:57:32.647225 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-04 00:57:32.647229 | orchestrator | Saturday 04 April 2026 00:55:58 +0000 (0:00:00.528) 0:00:22.861 ******** 2026-04-04 00:57:32.647233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:57:32.647237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:57:32.647241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:57:32.647248 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647254 | orchestrator | 2026-04-04 00:57:32.647260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-04 00:57:32.647266 | orchestrator | Saturday 04 April 2026 00:55:58 +0000 (0:00:00.365) 0:00:23.226 ******** 2026-04-04 00:57:32.647272 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:57:32.647277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:57:32.647283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:57:32.647289 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647295 | orchestrator | 2026-04-04 00:57:32.647302 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-04 00:57:32.647309 | orchestrator | Saturday 04 April 2026 00:55:58 +0000 (0:00:00.351) 0:00:23.578 ******** 2026-04-04 00:57:32.647315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:57:32.647322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:57:32.647329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:57:32.647333 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647336 | orchestrator | 2026-04-04 00:57:32.647340 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-04 00:57:32.647344 | orchestrator | Saturday 04 April 2026 00:55:59 +0000 (0:00:00.362) 0:00:23.940 ******** 2026-04-04 00:57:32.647348 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:57:32.647352 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:57:32.647356 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:57:32.647360 | orchestrator | 2026-04-04 00:57:32.647363 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-04 00:57:32.647367 | orchestrator | Saturday 04 April 2026 00:55:59 +0000 (0:00:00.286) 0:00:24.228 ******** 2026-04-04 00:57:32.647371 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-04 00:57:32.647376 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-04 00:57:32.647379 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-04 00:57:32.647383 | orchestrator | 
2026-04-04 00:57:32.647387 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-04 00:57:32.647391 | orchestrator | Saturday 04 April 2026 00:56:00 +0000 (0:00:00.485) 0:00:24.713 ******** 2026-04-04 00:57:32.647403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:57:32.647407 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:57:32.647411 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:57:32.647415 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-04 00:57:32.647420 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-04 00:57:32.647424 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-04 00:57:32.647427 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-04 00:57:32.647431 | orchestrator | 2026-04-04 00:57:32.647435 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-04 00:57:32.647439 | orchestrator | Saturday 04 April 2026 00:56:00 +0000 (0:00:00.953) 0:00:25.666 ******** 2026-04-04 00:57:32.647443 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:57:32.647447 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:57:32.647451 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:57:32.647455 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-04 00:57:32.647459 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-04 00:57:32.647463 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-04 00:57:32.647471 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-04 00:57:32.647475 | orchestrator | 2026-04-04 00:57:32.647479 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-04 00:57:32.647483 | orchestrator | Saturday 04 April 2026 00:56:02 +0000 (0:00:01.860) 0:00:27.527 ******** 2026-04-04 00:57:32.647487 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:57:32.647491 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:57:32.647495 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-04 00:57:32.647499 | orchestrator | 2026-04-04 00:57:32.647503 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-04 00:57:32.647507 | orchestrator | Saturday 04 April 2026 00:56:03 +0000 (0:00:00.359) 0:00:27.886 ******** 2026-04-04 00:57:32.647516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:57:32.647523 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:57:32.647527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 
'size': 3, 'type': 1}) 2026-04-04 00:57:32.647531 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:57:32.647534 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:57:32.647543 | orchestrator | 2026-04-04 00:57:32.647547 | orchestrator | TASK [generate keys] *********************************************************** 2026-04-04 00:57:32.647551 | orchestrator | Saturday 04 April 2026 00:56:42 +0000 (0:00:38.823) 0:01:06.710 ******** 2026-04-04 00:57:32.647555 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647559 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647563 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647567 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647575 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647578 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-04 00:57:32.647582 | orchestrator | 2026-04-04 00:57:32.647586 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-04 00:57:32.647590 | orchestrator | 
Saturday 04 April 2026 00:57:01 +0000 (0:00:19.781) 0:01:26.491 ******** 2026-04-04 00:57:32.647594 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647598 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647602 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647605 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647613 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647617 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:57:32.647621 | orchestrator | 2026-04-04 00:57:32.647642 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-04 00:57:32.647650 | orchestrator | Saturday 04 April 2026 00:57:11 +0000 (0:00:09.522) 0:01:36.014 ******** 2026-04-04 00:57:32.647657 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647661 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:57:32.647665 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:57:32.647669 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647673 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:57:32.647680 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:57:32.647684 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647688 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:57:32.647691 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:57:32.647695 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647699 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:57:32.647703 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:57:32.647707 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647711 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:57:32.647719 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:57:32.647728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:57:32.647732 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:57:32.647736 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:57:32.647740 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-04 00:57:32.647744 | orchestrator | 2026-04-04 00:57:32.647748 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:57:32.647751 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-04 00:57:32.647758 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-04 00:57:32.647762 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-04 00:57:32.647766 | orchestrator | 2026-04-04 00:57:32.647770 | 
orchestrator | 2026-04-04 00:57:32.647773 | orchestrator | 2026-04-04 00:57:32.647777 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:57:32.647781 | orchestrator | Saturday 04 April 2026 00:57:29 +0000 (0:00:18.169) 0:01:54.184 ******** 2026-04-04 00:57:32.647785 | orchestrator | =============================================================================== 2026-04-04 00:57:32.647789 | orchestrator | create openstack pool(s) ----------------------------------------------- 38.82s 2026-04-04 00:57:32.647793 | orchestrator | generate keys ---------------------------------------------------------- 19.78s 2026-04-04 00:57:32.647796 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.17s 2026-04-04 00:57:32.647800 | orchestrator | get keys from monitors -------------------------------------------------- 9.52s 2026-04-04 00:57:32.647804 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.95s 2026-04-04 00:57:32.647808 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.86s 2026-04-04 00:57:32.647811 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.37s 2026-04-04 00:57:32.647815 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.98s 2026-04-04 00:57:32.647819 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s 2026-04-04 00:57:32.647823 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s 2026-04-04 00:57:32.647827 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.70s 2026-04-04 00:57:32.647830 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-04-04 00:57:32.647834 | orchestrator | ceph-facts : Check if the ceph mon socket 
is in-use --------------------- 0.66s 2026-04-04 00:57:32.647838 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.65s 2026-04-04 00:57:32.647842 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.63s 2026-04-04 00:57:32.647846 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.56s 2026-04-04 00:57:32.647850 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.55s 2026-04-04 00:57:32.647853 | orchestrator | ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks --- 0.54s 2026-04-04 00:57:32.647857 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.53s 2026-04-04 00:57:32.647861 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.53s 2026-04-04 00:57:32.647865 | orchestrator | 2026-04-04 00:57:32 | INFO  | Task 5f98ac7f-71a2-49b8-a3f1-d0209df403d1 is in state STARTED 2026-04-04 00:57:32.647869 | orchestrator | 2026-04-04 00:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:35.674875 | orchestrator | 2026-04-04 00:57:35 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:35.677246 | orchestrator | 2026-04-04 00:57:35 | INFO  | Task 5f98ac7f-71a2-49b8-a3f1-d0209df403d1 is in state STARTED 2026-04-04 00:57:35.677324 | orchestrator | 2026-04-04 00:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:38.712332 | orchestrator | 2026-04-04 00:57:38 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:38.712430 | orchestrator | 2026-04-04 00:57:38 | INFO  | Task 5f98ac7f-71a2-49b8-a3f1-d0209df403d1 is in state STARTED 2026-04-04 00:57:38.712447 | orchestrator | 2026-04-04 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:41.737770 | orchestrator | 2026-04-04 00:57:41 | 
INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state STARTED 2026-04-04 00:57:41.739559 | orchestrator | 2026-04-04 00:57:41 | INFO  | Task 5f98ac7f-71a2-49b8-a3f1-d0209df403d1 is in state STARTED 2026-04-04 00:57:41.741096 | orchestrator | 2026-04-04 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:44.775783 | orchestrator | 2026-04-04 00:57:44.775869 | orchestrator | 2026-04-04 00:57:44.775924 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-04 00:57:44.775934 | orchestrator | 2026-04-04 00:57:44.775957 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-04 00:57:44.775965 | orchestrator | Saturday 04 April 2026 00:54:29 +0000 (0:00:00.093) 0:00:00.093 ******** 2026-04-04 00:57:44.775971 | orchestrator | ok: [localhost] => { 2026-04-04 00:57:44.775980 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-04 00:57:44.775987 | orchestrator | } 2026-04-04 00:57:44.775993 | orchestrator | 2026-04-04 00:57:44.776000 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-04 00:57:44.776006 | orchestrator | Saturday 04 April 2026 00:54:29 +0000 (0:00:00.038) 0:00:00.132 ******** 2026-04-04 00:57:44.776012 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-04 00:57:44.776021 | orchestrator | ...ignoring 2026-04-04 00:57:44.776028 | orchestrator | 2026-04-04 00:57:44.776035 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-04 00:57:44.776041 | orchestrator | Saturday 04 April 2026 00:54:32 +0000 (0:00:02.945) 0:00:03.078 ******** 2026-04-04 00:57:44.776048 | orchestrator | skipping: [localhost] 2026-04-04 00:57:44.776054 | orchestrator | 2026-04-04 00:57:44.776061 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-04 00:57:44.776067 | orchestrator | Saturday 04 April 2026 00:54:32 +0000 (0:00:00.051) 0:00:03.129 ******** 2026-04-04 00:57:44.776073 | orchestrator | ok: [localhost] 2026-04-04 00:57:44.776080 | orchestrator | 2026-04-04 00:57:44.776086 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:57:44.776093 | orchestrator | 2026-04-04 00:57:44.776099 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:57:44.776104 | orchestrator | Saturday 04 April 2026 00:54:33 +0000 (0:00:00.223) 0:00:03.352 ******** 2026-04-04 00:57:44.776111 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.776117 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:44.776123 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:44.776129 | orchestrator | 2026-04-04 00:57:44.776135 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:57:44.776142 | orchestrator | Saturday 04 April 2026 00:54:33 +0000 (0:00:00.376) 0:00:03.729 ******** 2026-04-04 00:57:44.776344 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-04 00:57:44.776352 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-04 00:57:44.776359 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-04 00:57:44.776388 | orchestrator | 2026-04-04 00:57:44.776396 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-04 00:57:44.776402 | orchestrator | 2026-04-04 00:57:44.776444 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-04 00:57:44.776452 | orchestrator | Saturday 04 April 2026 00:54:34 +0000 (0:00:00.575) 0:00:04.304 ******** 2026-04-04 00:57:44.776459 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-04 00:57:44.776466 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-04 00:57:44.776472 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-04 00:57:44.776478 | orchestrator | 2026-04-04 00:57:44.776485 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-04 00:57:44.776492 | orchestrator | Saturday 04 April 2026 00:54:34 +0000 (0:00:00.416) 0:00:04.721 ******** 2026-04-04 00:57:44.776498 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:44.776506 | orchestrator | 2026-04-04 00:57:44.776513 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-04 00:57:44.776519 | orchestrator | Saturday 04 April 2026 00:54:35 +0000 (0:00:00.640) 0:00:05.362 ******** 2026-04-04 00:57:44.776551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:57:44.776562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:57:44.776577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:57:44.776583 | orchestrator | 2026-04-04 00:57:44.776682 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-04 00:57:44.776694 | orchestrator | Saturday 04 April 2026 00:54:38 +0000 (0:00:03.481) 0:00:08.843 ******** 2026-04-04 00:57:44.776707 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.776716 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.776722 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.776726 | orchestrator | 2026-04-04 00:57:44.776730 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-04 00:57:44.776733 | orchestrator | Saturday 04 April 2026 00:54:39 +0000 (0:00:00.635) 0:00:09.479 ******** 2026-04-04 00:57:44.776737 | orchestrator | skipping: [testbed-node-1] 2026-04-04 
00:57:44.776742 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.776745 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.776847 | orchestrator | 2026-04-04 00:57:44.776853 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-04 00:57:44.776857 | orchestrator | Saturday 04 April 2026 00:54:40 +0000 (0:00:01.734) 0:00:11.213 ******** 2026-04-04 00:57:44.776862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:57:44.776894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:57:44.776903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 
00:57:44.776914 | orchestrator | 2026-04-04 00:57:44.776921 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-04 00:57:44.776927 | orchestrator | Saturday 04 April 2026 00:54:45 +0000 (0:00:04.097) 0:00:15.310 ******** 2026-04-04 00:57:44.776933 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.776939 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.776945 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.776951 | orchestrator | 2026-04-04 00:57:44.776958 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-04 00:57:44.776964 | orchestrator | Saturday 04 April 2026 00:54:45 +0000 (0:00:00.954) 0:00:16.265 ******** 2026-04-04 00:57:44.776969 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.776976 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:44.776982 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:44.776987 | orchestrator | 2026-04-04 00:57:44.776990 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-04 00:57:44.776994 | orchestrator | Saturday 04 April 2026 00:54:49 +0000 (0:00:03.491) 0:00:19.757 ******** 2026-04-04 00:57:44.776998 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:44.777002 | orchestrator | 2026-04-04 00:57:44.777006 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-04 00:57:44.777010 | orchestrator | Saturday 04 April 2026 00:54:49 +0000 (0:00:00.463) 0:00:20.221 ******** 2026-04-04 00:57:44.777023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777034 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777046 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777073 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777077 | orchestrator | 2026-04-04 00:57:44.777081 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-04 00:57:44.777085 | orchestrator | Saturday 04 April 2026 00:54:51 +0000 (0:00:02.056) 0:00:22.277 ******** 2026-04-04 00:57:44.777089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777093 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777111 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777119 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777123 | orchestrator | 2026-04-04 00:57:44.777127 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-04 00:57:44.777130 | orchestrator | Saturday 04 April 2026 00:54:54 +0000 (0:00:02.148) 0:00:24.425 ******** 2026-04-04 00:57:44.777137 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777152 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777168 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777176 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777189 | orchestrator | 2026-04-04 00:57:44.777193 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-04 00:57:44.777196 | orchestrator | Saturday 04 April 2026 00:54:56 +0000 
(0:00:02.516) 0:00:26.942 ******** 2026-04-04 00:57:44 | INFO  | Task d966872d-42de-4364-a4a2-ec5890cc32dd is in state SUCCESS 2026-04-04 00:57:44.777204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}}) 2026-04-04 00:57:44.777217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:57:44.777232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:57:44.777236 | orchestrator | 2026-04-04 00:57:44.777240 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-04 00:57:44.777244 | orchestrator | Saturday 04 April 2026 00:54:59 +0000 (0:00:03.065) 
0:00:30.008 ******** 2026-04-04 00:57:44.777248 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 00:57:44.777252 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:57:44.777256 | orchestrator | } 2026-04-04 00:57:44.777260 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 00:57:44.777263 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:57:44.777267 | orchestrator | } 2026-04-04 00:57:44.777271 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 00:57:44.777274 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:57:44.777278 | orchestrator | } 2026-04-04 00:57:44.777282 | orchestrator | 2026-04-04 00:57:44.777286 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 00:57:44.777289 | orchestrator | Saturday 04 April 2026 00:55:00 +0000 (0:00:00.328) 0:00:30.337 ******** 2026-04-04 00:57:44.777294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777303 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777324 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777341 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777347 | orchestrator | 2026-04-04 00:57:44.777354 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-04 00:57:44.777359 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:02.993) 0:00:33.331 ******** 2026-04-04 00:57:44.777366 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777370 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777374 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777378 | orchestrator | 2026-04-04 00:57:44.777382 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-04 00:57:44.777386 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:00.496) 0:00:33.828 ******** 2026-04-04 00:57:44.777389 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777393 | orchestrator | 2026-04-04 00:57:44.777397 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-04 00:57:44.777401 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:00.128) 0:00:33.956 ******** 2026-04-04 00:57:44.777404 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777408 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777412 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:57:44.777416 | orchestrator | 2026-04-04 00:57:44.777423 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-04 00:57:44.777427 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:00.302) 0:00:34.259 ******** 2026-04-04 00:57:44.777430 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777437 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777441 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777444 | orchestrator | 2026-04-04 00:57:44.777448 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-04 00:57:44.777452 | orchestrator | Saturday 04 April 2026 00:55:04 +0000 (0:00:00.318) 0:00:34.577 ******** 2026-04-04 00:57:44.777456 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777459 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777463 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777467 | orchestrator | 2026-04-04 00:57:44.777471 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-04 00:57:44.777474 | orchestrator | Saturday 04 April 2026 00:55:04 +0000 (0:00:00.449) 0:00:35.027 ******** 2026-04-04 00:57:44.777478 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777482 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777485 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777489 | orchestrator | 2026-04-04 00:57:44.777493 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-04 00:57:44.777497 | orchestrator | Saturday 04 April 2026 00:55:05 +0000 (0:00:00.294) 0:00:35.321 ******** 2026-04-04 00:57:44.777501 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777504 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777508 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:57:44.777512 | orchestrator | 2026-04-04 00:57:44.777515 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-04 00:57:44.777519 | orchestrator | Saturday 04 April 2026 00:55:05 +0000 (0:00:00.303) 0:00:35.625 ******** 2026-04-04 00:57:44.777523 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777527 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777530 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777534 | orchestrator | 2026-04-04 00:57:44.777538 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-04 00:57:44.777542 | orchestrator | Saturday 04 April 2026 00:55:05 +0000 (0:00:00.311) 0:00:35.937 ******** 2026-04-04 00:57:44.777549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-04 00:57:44.777553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-04 00:57:44.777556 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-04 00:57:44.777560 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-04 00:57:44.777564 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-04 00:57:44.777567 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-04 00:57:44.777571 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777577 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-04 00:57:44.777589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-04 00:57:44.777595 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-04 00:57:44.777601 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777629 | orchestrator | 2026-04-04 00:57:44.777634 | orchestrator | TASK [mariadb : Writing 
hostname of host with the largest seqno to temp file] *** 2026-04-04 00:57:44.777638 | orchestrator | Saturday 04 April 2026 00:55:05 +0000 (0:00:00.353) 0:00:36.290 ******** 2026-04-04 00:57:44.777642 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777646 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777650 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777653 | orchestrator | 2026-04-04 00:57:44.777657 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-04 00:57:44.777661 | orchestrator | Saturday 04 April 2026 00:55:06 +0000 (0:00:00.457) 0:00:36.748 ******** 2026-04-04 00:57:44.777664 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777668 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777672 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777676 | orchestrator | 2026-04-04 00:57:44.777679 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-04 00:57:44.777683 | orchestrator | Saturday 04 April 2026 00:55:06 +0000 (0:00:00.250) 0:00:36.998 ******** 2026-04-04 00:57:44.777746 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777757 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777763 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777767 | orchestrator | 2026-04-04 00:57:44.777771 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-04 00:57:44.777775 | orchestrator | Saturday 04 April 2026 00:55:06 +0000 (0:00:00.252) 0:00:37.251 ******** 2026-04-04 00:57:44.777779 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777783 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777787 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777790 | orchestrator | 2026-04-04 00:57:44.777794 | orchestrator | TASK [mariadb : 
Starting first MariaDB container] ****************************** 2026-04-04 00:57:44.777798 | orchestrator | Saturday 04 April 2026 00:55:07 +0000 (0:00:00.256) 0:00:37.507 ******** 2026-04-04 00:57:44.777802 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777806 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777809 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777813 | orchestrator | 2026-04-04 00:57:44.777817 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-04 00:57:44.777821 | orchestrator | Saturday 04 April 2026 00:55:07 +0000 (0:00:00.371) 0:00:37.879 ******** 2026-04-04 00:57:44.777825 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777828 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777832 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777836 | orchestrator | 2026-04-04 00:57:44.777840 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-04 00:57:44.777843 | orchestrator | Saturday 04 April 2026 00:55:07 +0000 (0:00:00.265) 0:00:38.144 ******** 2026-04-04 00:57:44.777847 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777856 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777865 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777868 | orchestrator | 2026-04-04 00:57:44.777872 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-04 00:57:44.777880 | orchestrator | Saturday 04 April 2026 00:55:08 +0000 (0:00:00.277) 0:00:38.422 ******** 2026-04-04 00:57:44.777884 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777887 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777891 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777895 | orchestrator | 2026-04-04 00:57:44.777899 | orchestrator | TASK [mariadb : 
Restart slave MariaDB container(s)] **************************** 2026-04-04 00:57:44.777902 | orchestrator | Saturday 04 April 2026 00:55:08 +0000 (0:00:00.279) 0:00:38.701 ******** 2026-04-04 00:57:44.777907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777912 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777926 | orchestrator | 
skipping: [testbed-node-2] 2026-04-04 00:57:44.777933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.777940 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777946 | orchestrator | 
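The skipped recovery tasks above ("Registering MariaDB seqno variable", "Comparing seqno value on all mariadb hosts", "Writing hostname of host with the largest seqno to temp file") only run during a Galera cluster recovery: each node reports the last committed transaction number (seqno) from its `grastate.dat`, and the node with the largest value becomes `mariadb_recover_inventory_name`. A minimal sketch of that selection, with a hypothetical helper name rather than the role's actual code:

```python
def pick_recovery_host(seqnos: dict[str, int]) -> str:
    """Return the inventory hostname holding the largest Galera seqno.

    A seqno of -1 marks an unclean shutdown (position unknown); such a
    node only wins the comparison if no node has a clean, larger value.
    """
    return max(seqnos, key=lambda host: seqnos[host])


# Example: node-1 committed the most transactions before the outage,
# so it would be chosen as the host to bootstrap the cluster from.
seqnos = {"testbed-node-0": 1040, "testbed-node-1": 1042, "testbed-node-2": -1}
print(pick_recovery_host(seqnos))  # -> testbed-node-1
```

In this run the tasks are skipped because this is a deploy, not a `mariadb_recovery` invocation.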
2026-04-04 00:57:44.777952 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-04 00:57:44.777958 | orchestrator | Saturday 04 April 2026 00:55:10 +0000 (0:00:02.155) 0:00:40.857 ******** 2026-04-04 00:57:44.777964 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.777970 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.777975 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.777981 | orchestrator | 2026-04-04 00:57:44.777986 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-04 00:57:44.777992 | orchestrator | Saturday 04 April 2026 00:55:10 +0000 (0:00:00.342) 0:00:41.199 ******** 2026-04-04 00:57:44.778003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.778072 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.778094 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:57:44.778111 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778117 | orchestrator | 2026-04-04 00:57:44.778123 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-04 00:57:44.778130 | orchestrator | Saturday 04 April 2026 00:55:12 +0000 (0:00:02.057) 0:00:43.257 ******** 2026-04-04 00:57:44.778136 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778141 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778153 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778159 | orchestrator | 2026-04-04 00:57:44.778164 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-04 00:57:44.778174 | orchestrator | Saturday 04 April 2026 00:55:13 +0000 (0:00:00.322) 0:00:43.580 ******** 2026-04-04 00:57:44.778181 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778187 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778193 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778198 | orchestrator | 2026-04-04 00:57:44.778204 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-04 00:57:44.778210 | orchestrator | Saturday 04 April 2026 00:55:13 +0000 (0:00:00.482) 0:00:44.062 ******** 2026-04-04 00:57:44.778215 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778221 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778228 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:57:44.778234 | orchestrator | 2026-04-04 00:57:44.778240 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-04 00:57:44.778246 | orchestrator | Saturday 04 April 2026 00:55:14 +0000 (0:00:00.291) 0:00:44.353 ******** 2026-04-04 00:57:44.778253 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778258 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778262 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778266 | orchestrator | 2026-04-04 00:57:44.778270 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-04 00:57:44.778274 | orchestrator | Saturday 04 April 2026 00:55:14 +0000 (0:00:00.499) 0:00:44.853 ******** 2026-04-04 00:57:44.778277 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778281 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778287 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778293 | orchestrator | 2026-04-04 00:57:44.778298 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-04 00:57:44.778305 | orchestrator | Saturday 04 April 2026 00:55:15 +0000 (0:00:00.445) 0:00:45.298 ******** 2026-04-04 00:57:44.778311 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.778316 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:44.778320 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:44.778324 | orchestrator | 2026-04-04 00:57:44.778328 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-04 00:57:44.778332 | orchestrator | Saturday 04 April 2026 00:55:16 +0000 (0:00:01.085) 0:00:46.383 ******** 2026-04-04 00:57:44.778335 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.778340 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:44.778344 | orchestrator | ok: [testbed-node-2] 
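The "Create MariaDB volume" / "Divide hosts by their MariaDB volume availability" pair above is how the role decides whether a cluster already exists: hosts whose `mariadb` Docker volume was already present carry cluster data, while hosts where the volume was just created are fresh. A rough sketch of that grouping, assuming hypothetical names:

```python
def divide_by_volume(volume_was_created: dict[str, bool]) -> tuple[list[str], list[str]]:
    """Split hosts into those whose mariadb volume already existed
    (carrying cluster data) and those where it was just created."""
    had_data = [h for h, created in volume_was_created.items() if not created]
    fresh = [h for h, created in volume_was_created.items() if created]
    return had_data, fresh


# In this run the volume task reported "changed" on every node, i.e. all
# volumes were newly created, so no prior cluster exists and one node
# (testbed-node-0) goes on to include bootstrap_cluster.yml.
had_data, fresh = divide_by_volume(
    {"testbed-node-0": True, "testbed-node-1": True, "testbed-node-2": True}
)
print(bool(had_data))  # -> False, meaning: bootstrap a brand-new cluster
```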
2026-04-04 00:57:44.778348 | orchestrator | 2026-04-04 00:57:44.778352 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-04 00:57:44.778355 | orchestrator | Saturday 04 April 2026 00:55:16 +0000 (0:00:00.297) 0:00:46.681 ******** 2026-04-04 00:57:44.778364 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.778368 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:44.778371 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:44.778375 | orchestrator | 2026-04-04 00:57:44.778379 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-04 00:57:44.778383 | orchestrator | Saturday 04 April 2026 00:55:16 +0000 (0:00:00.336) 0:00:47.018 ******** 2026-04-04 00:57:44.778389 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-04 00:57:44.778394 | orchestrator | ...ignoring 2026-04-04 00:57:44.778399 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-04 00:57:44.778403 | orchestrator | ...ignoring 2026-04-04 00:57:44.778408 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-04 00:57:44.778413 | orchestrator | ...ignoring 2026-04-04 00:57:44.778417 | orchestrator | 2026-04-04 00:57:44.778422 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-04 00:57:44.778426 | orchestrator | Saturday 04 April 2026 00:55:27 +0000 (0:00:11.118) 0:00:58.137 ******** 2026-04-04 00:57:44.778431 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.778435 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:44.778440 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:44.778445 | orchestrator | 2026-04-04 00:57:44.778449 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-04 00:57:44.778454 | orchestrator | Saturday 04 April 2026 00:55:28 +0000 (0:00:00.322) 0:00:58.459 ******** 2026-04-04 00:57:44.778458 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778463 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778469 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778475 | orchestrator | 2026-04-04 00:57:44.778480 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-04 00:57:44.778486 | orchestrator | Saturday 04 April 2026 00:55:28 +0000 (0:00:00.290) 0:00:58.749 ******** 2026-04-04 00:57:44.778491 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778497 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778503 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778508 | orchestrator | 2026-04-04 00:57:44.778514 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-04 00:57:44.778519 | orchestrator | Saturday 04 April 2026 00:55:28 +0000 (0:00:00.305) 0:00:59.055 ******** 2026-04-04 00:57:44.778525 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 00:57:44.778532 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778538 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778543 | orchestrator | 2026-04-04 00:57:44.778550 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-04 00:57:44.778556 | orchestrator | Saturday 04 April 2026 00:55:29 +0000 (0:00:00.446) 0:00:59.502 ******** 2026-04-04 00:57:44.778561 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.778567 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:44.778573 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:44.778579 | orchestrator | 2026-04-04 00:57:44.778585 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-04 00:57:44.778598 | orchestrator | Saturday 04 April 2026 00:55:29 +0000 (0:00:00.305) 0:00:59.807 ******** 2026-04-04 00:57:44.778741 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778760 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778770 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778774 | orchestrator | 2026-04-04 00:57:44.778779 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-04 00:57:44.778783 | orchestrator | Saturday 04 April 2026 00:55:29 +0000 (0:00:00.315) 0:01:00.123 ******** 2026-04-04 00:57:44.778792 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778796 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778800 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-04 00:57:44.778804 | orchestrator | 2026-04-04 00:57:44.778808 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-04 00:57:44.778812 | orchestrator | Saturday 04 April 2026 00:55:30 +0000 (0:00:00.352) 0:01:00.476 ******** 2026-04-04 
00:57:44.778815 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.778819 | orchestrator | 2026-04-04 00:57:44.778823 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-04 00:57:44.778827 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:10.606) 0:01:11.083 ******** 2026-04-04 00:57:44.778831 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.778835 | orchestrator | 2026-04-04 00:57:44.778838 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-04 00:57:44.778842 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:00.108) 0:01:11.192 ******** 2026-04-04 00:57:44.778846 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778850 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778854 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778858 | orchestrator | 2026-04-04 00:57:44.778861 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-04 00:57:44.778865 | orchestrator | Saturday 04 April 2026 00:55:41 +0000 (0:00:00.729) 0:01:11.922 ******** 2026-04-04 00:57:44.778869 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.778873 | orchestrator | 2026-04-04 00:57:44.778876 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-04 00:57:44.778880 | orchestrator | Saturday 04 April 2026 00:55:49 +0000 (0:00:07.481) 0:01:19.403 ******** 2026-04-04 00:57:44.778884 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.778888 | orchestrator | 2026-04-04 00:57:44.778892 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-04 00:57:44.778896 | orchestrator | Saturday 04 April 2026 00:55:50 +0000 (0:00:01.602) 0:01:21.006 ******** 2026-04-04 00:57:44.778900 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.778903 | 
orchestrator | 2026-04-04 00:57:44.778907 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-04 00:57:44.778911 | orchestrator | Saturday 04 April 2026 00:55:52 +0000 (0:00:02.207) 0:01:23.213 ******** 2026-04-04 00:57:44.778915 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.778919 | orchestrator | 2026-04-04 00:57:44.778923 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-04 00:57:44.778926 | orchestrator | Saturday 04 April 2026 00:55:53 +0000 (0:00:00.431) 0:01:23.644 ******** 2026-04-04 00:57:44.778930 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778934 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:44.778938 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:44.778942 | orchestrator | 2026-04-04 00:57:44.778946 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-04 00:57:44.778949 | orchestrator | Saturday 04 April 2026 00:55:53 +0000 (0:00:00.343) 0:01:23.988 ******** 2026-04-04 00:57:44.778953 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:44.778957 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:44.778961 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:44.778964 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-04 00:57:44.778968 | orchestrator | 2026-04-04 00:57:44.778972 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-04 00:57:44.778976 | orchestrator | skipping: no hosts matched 2026-04-04 00:57:44.778980 | orchestrator | 2026-04-04 00:57:44.778983 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-04 00:57:44.778987 | orchestrator | 2026-04-04 00:57:44.778991 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-04-04 00:57:44.778999 | orchestrator | Saturday 04 April 2026 00:55:54 +0000 (0:00:00.320) 0:01:24.308 ******** 2026-04-04 00:57:44.779002 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:44.779006 | orchestrator | 2026-04-04 00:57:44.779010 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-04 00:57:44.779014 | orchestrator | Saturday 04 April 2026 00:56:10 +0000 (0:00:16.380) 0:01:40.689 ******** 2026-04-04 00:57:44.779018 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:44.779022 | orchestrator | 2026-04-04 00:57:44.779025 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-04 00:57:44.779029 | orchestrator | Saturday 04 April 2026 00:56:26 +0000 (0:00:15.652) 0:01:56.341 ******** 2026-04-04 00:57:44.779033 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:44.779037 | orchestrator | 2026-04-04 00:57:44.779040 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-04 00:57:44.779044 | orchestrator | 2026-04-04 00:57:44.779048 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-04 00:57:44.779052 | orchestrator | Saturday 04 April 2026 00:56:28 +0000 (0:00:02.380) 0:01:58.722 ******** 2026-04-04 00:57:44.779056 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:44.779059 | orchestrator | 2026-04-04 00:57:44.779063 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-04 00:57:44.779067 | orchestrator | Saturday 04 April 2026 00:56:45 +0000 (0:00:16.990) 0:02:15.712 ******** 2026-04-04 00:57:44.779071 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:44.779075 | orchestrator | 2026-04-04 00:57:44.779078 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-04 00:57:44.779082 
| orchestrator | Saturday 04 April 2026 00:57:02 +0000 (0:00:16.602) 0:02:32.315 ******** 2026-04-04 00:57:44.779086 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:44.779090 | orchestrator | 2026-04-04 00:57:44.779100 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-04 00:57:44.779104 | orchestrator | 2026-04-04 00:57:44.779108 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-04 00:57:44.779115 | orchestrator | Saturday 04 April 2026 00:57:04 +0000 (0:00:02.692) 0:02:35.007 ******** 2026-04-04 00:57:44.779119 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:44.779123 | orchestrator | 2026-04-04 00:57:44.779126 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-04 00:57:44.779130 | orchestrator | Saturday 04 April 2026 00:57:15 +0000 (0:00:10.884) 0:02:45.892 ******** 2026-04-04 00:57:44.779134 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.779138 | orchestrator | 2026-04-04 00:57:44.779141 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-04 00:57:44.779145 | orchestrator | Saturday 04 April 2026 00:57:20 +0000 (0:00:04.630) 0:02:50.522 ******** 2026-04-04 00:57:44.779149 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:44.779153 | orchestrator | 2026-04-04 00:57:44.779156 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-04 00:57:44.779160 | orchestrator | 2026-04-04 00:57:44.779164 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-04 00:57:44.779168 | orchestrator | Saturday 04 April 2026 00:57:22 +0000 (0:00:02.643) 0:02:53.165 ******** 2026-04-04 00:57:44.779171 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:44.779175 | orchestrator | 
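The restart plays above proceed one node per play: restart the container, wait for the service port, then wait for the node to reach WSREP state "Synced" before touching the next node (testbed-node-1, then testbed-node-2, then a final restart of the bootstrap host testbed-node-0). The sync wait behaves roughly like the polling loop below; `get_state` is a stand-in for querying `wsrep_local_state_comment` and is an assumption, not the role's actual implementation:

```python
import time


def wait_for_wsrep_sync(get_state, timeout: float = 360, interval: float = 10) -> bool:
    """Poll a node's WSREP state until it reports 'Synced' or time runs out.

    get_state is a placeholder callable standing in for
    SHOW STATUS LIKE 'wsrep_local_state_comment' against the node.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Synced":
            return True
        time.sleep(interval)
    return False


# Simulated node that joins, serves as SST donor, then syncs.
states = iter(["Joined", "Donor/Desynced", "Synced"])
print(wait_for_wsrep_sync(lambda: next(states), timeout=5, interval=0))  # -> True
```

Serializing the restarts this way keeps quorum: at any moment at most one Galera member is down or desynced.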
2026-04-04 00:57:44.779179 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-04 00:57:44.779183 | orchestrator | Saturday 04 April 2026 00:57:23 +0000 (0:00:00.503) 0:02:53.669 ********
2026-04-04 00:57:44.779186 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:44.779190 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:44.779194 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:57:44.779198 | orchestrator |
2026-04-04 00:57:44.779201 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-04 00:57:44.779208 | orchestrator | Saturday 04 April 2026 00:57:25 +0000 (0:00:02.596) 0:02:56.266 ********
2026-04-04 00:57:44.779212 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:44.779216 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:44.779219 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:57:44.779223 | orchestrator |
2026-04-04 00:57:44.779227 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-04 00:57:44.779231 | orchestrator | Saturday 04 April 2026 00:57:28 +0000 (0:00:02.288) 0:02:58.555 ********
2026-04-04 00:57:44.779234 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:44.779238 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:44.779242 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:57:44.779246 | orchestrator |
2026-04-04 00:57:44.779249 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-04 00:57:44.779253 | orchestrator | Saturday 04 April 2026 00:57:30 +0000 (0:00:02.454) 0:03:01.010 ********
2026-04-04 00:57:44.779257 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:44.779261 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:44.779264 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:57:44.779268 | orchestrator |
2026-04-04 00:57:44.779272 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-04-04 00:57:44.779276 | orchestrator | Saturday 04 April 2026 00:57:33 +0000 (0:00:02.473) 0:03:03.483 ********
2026-04-04 00:57:44.779279 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:44.779283 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:57:44.779287 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:57:44.779291 | orchestrator |
2026-04-04 00:57:44.779294 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-04 00:57:44.779298 | orchestrator | Saturday 04 April 2026 00:57:37 +0000 (0:00:04.016) 0:03:07.500 ********
2026-04-04 00:57:44.779302 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:57:44.779306 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:44.779309 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:44.779313 | orchestrator |
2026-04-04 00:57:44.779317 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-04 00:57:44.779321 | orchestrator | Saturday 04 April 2026 00:57:38 +0000 (0:00:01.723) 0:03:09.223 ********
2026-04-04 00:57:44.779324 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:57:44.779328 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:44.779332 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:44.779335 | orchestrator |
2026-04-04 00:57:44.779339 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-04 00:57:44.779343 | orchestrator | Saturday 04 April 2026 00:57:39 +0000 (0:00:00.454) 0:03:09.678 ********
2026-04-04 00:57:44.779347 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:44.779350 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:57:44.779354 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:57:44.779358 | orchestrator |
2026-04-04 00:57:44.779362 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-04 00:57:44.779365 | orchestrator | Saturday 04 April 2026 00:57:42 +0000 (0:00:02.845) 0:03:12.523 ********
2026-04-04 00:57:44.779369 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:57:44.779373 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:44.779377 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:44.779380 | orchestrator |
2026-04-04 00:57:44.779384 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:57:44.779388 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-04 00:57:44.779393 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1
2026-04-04 00:57:44.779398 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-04-04 00:57:44.779409 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-04-04 00:57:44.779413 | orchestrator |
2026-04-04 00:57:44.779416 | orchestrator |
2026-04-04 00:57:44.779423 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:57:44.779427 | orchestrator | Saturday 04 April 2026 00:57:42 +0000 (0:00:00.183) 0:03:12.707 ********
2026-04-04 00:57:44.779431 | orchestrator | ===============================================================================
2026-04-04 00:57:44.779434 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.37s
2026-04-04 00:57:44.779438 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.26s
2026-04-04 00:57:44.779442 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.12s
2026-04-04 00:57:44.779446 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.88s
2026-04-04 00:57:44.779449 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.61s
2026-04-04 00:57:44.779453 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.48s
2026-04-04 00:57:44.779457 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.07s
2026-04-04 00:57:44.779461 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.63s
2026-04-04 00:57:44.779464 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.10s
2026-04-04 00:57:44.779468 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.02s
2026-04-04 00:57:44.779472 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.49s
2026-04-04 00:57:44.779476 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.48s
2026-04-04 00:57:44.779479 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.07s
2026-04-04 00:57:44.779483 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.99s
2026-04-04 00:57:44.779487 | orchestrator | Check MariaDB service --------------------------------------------------- 2.95s
2026-04-04 00:57:44.779491 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.85s
2026-04-04 00:57:44.779494 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.64s
2026-04-04 00:57:44.779498 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.60s
2026-04-04 00:57:44.779502 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.52s
2026-04-04 00:57:44.779505 | orchestrator | mariadb : 
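Further down, the log shows the deployment CLI polling three task UUIDs once per second, printing "Task ... is in state STARTED" until each reaches SUCCESS. A minimal sketch of that polling pattern, with `get_state` as a hypothetical stand-in for the real task-state lookup (error states are omitted for brevity):

```python
import time


def wait_for_tasks(task_ids, get_state, interval: float = 1.0):
    """Poll every pending task once per interval until all report SUCCESS.

    get_state(task_id) -> str is assumed to return e.g. "STARTED" or
    "SUCCESS", mirroring the states printed in the log above.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted copy: safe to discard below
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Polling each task independently means one slow task (here, the cephclient deployment) does not block reporting progress on the others.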
Granting permissions on Mariabackup database to backup user --- 2.47s
2026-04-04 00:57:44.779509 | orchestrator | 2026-04-04 00:57:44 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED
2026-04-04 00:57:44.779513 | orchestrator | 2026-04-04 00:57:44 | INFO  | Task 5f98ac7f-71a2-49b8-a3f1-d0209df403d1 is in state STARTED
2026-04-04 00:57:44.779517 | orchestrator | 2026-04-04 00:57:44 | INFO  | Task 23ff21c5-f278-46e1-b342-b9632177e4b6 is in state STARTED
2026-04-04 00:57:44.779521 | orchestrator | 2026-04-04 00:57:44 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:58:06.028227 | orchestrator | 2026-04-04 00:58:06 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED
2026-04-04 00:58:06.028338 | orchestrator | 2026-04-04 00:58:06 | INFO  | Task 5f98ac7f-71a2-49b8-a3f1-d0209df403d1 is in state SUCCESS
2026-04-04 00:58:06.029560 | orchestrator | 2026-04-04 00:58:06 | INFO  | Task 23ff21c5-f278-46e1-b342-b9632177e4b6 is in state STARTED
2026-04-04 00:58:06.030906 | orchestrator | 2026-04-04 00:58:06 | INFO  | Task 16b68ff7-b7f3-480e-830b-fdc4c37925be is in state STARTED
2026-04-04 00:58:06.030954 | orchestrator | 2026-04-04 00:58:06 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:58:09.056707 | orchestrator | 2026-04-04 00:58:09 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED
2026-04-04 00:58:09.058874 | orchestrator | 2026-04-04 00:58:09 | INFO  | Task 23ff21c5-f278-46e1-b342-b9632177e4b6 is in state STARTED
2026-04-04 00:58:09.058921 | orchestrator | 2026-04-04 00:58:09 | INFO  | Task 16b68ff7-b7f3-480e-830b-fdc4c37925be is in state STARTED
2026-04-04 00:58:09.058928 | orchestrator | 2026-04-04 00:58:09 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:58:54.666048 | orchestrator | 2026-04-04 00:58:54 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED
2026-04-04 00:58:54.668008 | orchestrator | 2026-04-04 00:58:54 | INFO  | Task 23ff21c5-f278-46e1-b342-b9632177e4b6 is in state STARTED
2026-04-04 00:58:54.671268 | orchestrator | 2026-04-04 00:58:54 | INFO  | Task 16b68ff7-b7f3-480e-830b-fdc4c37925be is in state SUCCESS
2026-04-04 00:58:54.671313 | orchestrator |
2026-04-04 00:58:54.671322 | orchestrator |
2026-04-04 00:58:54.671329 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-04 00:58:54.671337 | orchestrator |
2026-04-04 00:58:54.671343 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-04 00:58:54.671350 | orchestrator | Saturday 04 April 2026 00:57:32 +0000 (0:00:00.196) 0:00:00.196 ********
2026-04-04 00:58:54.671356 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-04 00:58:54.671363 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671369 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-04 00:58:54.671380 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671386 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-04 00:58:54.671401 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-04 00:58:54.671413 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-04 00:58:54.671419 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-04 00:58:54.671426 | orchestrator |
2026-04-04 00:58:54.671432 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-04 00:58:54.671444 | orchestrator | Saturday 04 April 2026 00:57:37 +0000 (0:00:04.665) 0:00:04.862 ********
2026-04-04 00:58:54.671451 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-04 00:58:54.671458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671471 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-04 00:58:54.671478 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671493 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-04 00:58:54.671554 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-04 00:58:54.671563 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-04 00:58:54.671568 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-04 00:58:54.671571 | orchestrator |
2026-04-04 00:58:54.671575 | orchestrator | TASK [Create share directory] **************************************************
2026-04-04 00:58:54.671579 | orchestrator | Saturday 04 April 2026 00:57:41 +0000 (0:00:04.263) 0:00:09.125 ********
2026-04-04 00:58:54.671584 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-04 00:58:54.671588 | orchestrator |
2026-04-04 00:58:54.671592 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-04 00:58:54.671602 | orchestrator | Saturday 04 April 2026 00:57:42 +0000 (0:00:00.915) 0:00:10.041 ********
2026-04-04 00:58:54.671609 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-04 00:58:54.671614 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671618 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671631 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-04 00:58:54.671635 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671639 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-04 00:58:54.671642 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-04 00:58:54.671721 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-04 00:58:54.671738 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-04 00:58:54.671745 | orchestrator |
2026-04-04 00:58:54.671752 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-04 00:58:54.671759 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:11.754) 0:00:21.795 ********
2026-04-04 00:58:54.671770 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-04 00:58:54.671778 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-04 00:58:54.671785 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-04 00:58:54.671801 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-04 00:58:54.671809 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-04 00:58:54.671815 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-04 00:58:54.671822 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-04 00:58:54.671829 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-04 00:58:54.671836 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-04 00:58:54.671843 | orchestrator |
2026-04-04 00:58:54.671850 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-04 00:58:54.671857 | orchestrator | Saturday 04 April 2026 00:57:57 +0000 (0:00:02.809) 0:00:24.605 ********
2026-04-04 00:58:54.671863 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-04 00:58:54.671869 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671875 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671881 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-04 00:58:54.671888 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-04 00:58:54.671895 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-04 00:58:54.671902 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-04 00:58:54.671909 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-04 00:58:54.671916 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-04 00:58:54.671922 | orchestrator |
2026-04-04 00:58:54.671928 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:58:54.671935 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:58:54.671943 | orchestrator |
2026-04-04 00:58:54.671950 | orchestrator |
2026-04-04 00:58:54.671957 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:58:54.671964 | orchestrator | Saturday 04 April 2026 00:58:03 +0000 (0:00:05.802) 0:00:30.407 ********
2026-04-04 00:58:54.671970 | orchestrator | ===============================================================================
2026-04-04 00:58:54.671983 | orchestrator | Write ceph keys to the share directory --------------------------------- 11.75s
2026-04-04 00:58:54.671991 | orchestrator | Write ceph keys to the configuration directory -------------------------- 5.80s
2026-04-04 00:58:54.671997 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.67s
2026-04-04 00:58:54.672003 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.26s
2026-04-04 00:58:54.672010 | orchestrator | Check if target directories exist --------------------------------------- 2.81s
2026-04-04 00:58:54.672016 | orchestrator | Create share directory -------------------------------------------------- 0.92s
2026-04-04 00:58:54.672023 | orchestrator |
2026-04-04 00:58:54.672029 | orchestrator |
2026-04-04 00:58:54.672035 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-04 00:58:54.672041 | orchestrator |
2026-04-04 00:58:54.672047 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-04 00:58:54.672054 | orchestrator | Saturday 04 April 2026 00:58:06 +0000 (0:00:00.277) 0:00:00.277 ********
2026-04-04 00:58:54.672060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-04 00:58:54.672072 | orchestrator |
2026-04-04 00:58:54.672079 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-04 00:58:54.672085 | orchestrator | Saturday 04 April 2026 00:58:06 +0000 (0:00:00.196) 0:00:00.473 ********
2026-04-04 00:58:54.672091 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-04 00:58:54.672098 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-04 00:58:54.672104 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-04 00:58:54.672110 | orchestrator |
2026-04-04 00:58:54.672117 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-04 00:58:54.672124 | orchestrator | Saturday 04 April 2026 00:58:07 +0000 (0:00:01.344) 0:00:01.818 ********
2026-04-04 00:58:54.672131 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-04 00:58:54.672138 | orchestrator |
2026-04-04 00:58:54.672145 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-04 00:58:54.672152 | orchestrator | Saturday 04 April 2026 00:58:08 +0000 (0:00:00.876) 0:00:02.694 ********
2026-04-04 00:58:54.672158 | orchestrator | changed: [testbed-manager]
2026-04-04 00:58:54.672164 | orchestrator |
2026-04-04 00:58:54.672170 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-04 00:58:54.672181 | orchestrator | Saturday 04 April 2026 00:58:09 +0000 (0:00:00.698) 0:00:03.393 ********
2026-04-04 00:58:54.672188 | orchestrator | changed: [testbed-manager]
2026-04-04 00:58:54.672194 | orchestrator |
2026-04-04 00:58:54.672200 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-04 00:58:54.672207 | orchestrator | Saturday 04 April 2026 00:58:09 +0000 (0:00:00.752) 0:00:04.145 ********
2026-04-04 00:58:54.672214 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-04 00:58:54.672227 | orchestrator | ok: [testbed-manager]
2026-04-04 00:58:54.672234 | orchestrator |
2026-04-04 00:58:54.672240 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-04 00:58:54.672247 | orchestrator | Saturday 04 April 2026 00:58:45 +0000 (0:00:35.881) 0:00:40.027 ********
2026-04-04 00:58:54.672254 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-04 00:58:54.672261 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-04 00:58:54.672268 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-04 00:58:54.672274 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-04 00:58:54.672281 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-04 00:58:54.672287 | orchestrator |
2026-04-04 00:58:54.672294 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-04 00:58:54.672305 | orchestrator | Saturday 04 April 2026 00:58:49 +0000 (0:00:03.515) 0:00:43.543 ********
2026-04-04 00:58:54.672312 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-04 00:58:54.672318 | orchestrator |
2026-04-04 00:58:54.672324 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-04 00:58:54.672331 | orchestrator | Saturday 04 April 2026 00:58:49 +0000 (0:00:00.479) 0:00:44.022 ********
2026-04-04 00:58:54.672338 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:58:54.672344 | orchestrator |
2026-04-04 00:58:54.672350 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-04 00:58:54.672357 | orchestrator | Saturday 04 April 2026 00:58:49 +0000 (0:00:00.121) 0:00:44.143 ********
2026-04-04 00:58:54.672363 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:58:54.672370 | orchestrator |
2026-04-04 00:58:54.672379 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-04 00:58:54.672386 | orchestrator | Saturday 04 April 2026 00:58:50 +0000 (0:00:00.290) 0:00:44.434 ********
2026-04-04 00:58:54.672392 | orchestrator | changed: [testbed-manager]
2026-04-04 00:58:54.672398 | orchestrator |
2026-04-04 00:58:54.672405 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-04 00:58:54.672411 | orchestrator | Saturday 04 April 2026 00:58:51 +0000 (0:00:01.232) 0:00:45.667 ********
2026-04-04 00:58:54.672417 | orchestrator | changed: [testbed-manager]
2026-04-04 00:58:54.672424 | orchestrator |
2026-04-04 00:58:54.672432 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-04 00:58:54.672439 | orchestrator | Saturday 04 April 2026 00:58:52 +0000 (0:00:00.637) 0:00:46.304 ********
2026-04-04 00:58:54.672445 | orchestrator | changed: [testbed-manager]
2026-04-04 00:58:54.672452 | orchestrator |
2026-04-04 00:58:54.672458 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-04 00:58:54.672465 | orchestrator | Saturday 04 April 2026 00:58:52 +0000 (0:00:00.506) 0:00:46.811 ********
2026-04-04 00:58:54.672515 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-04 00:58:54.672522 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-04 00:58:54.672529 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-04 00:58:54.672535 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-04 00:58:54.672542 | orchestrator |
2026-04-04 00:58:54.672548 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:58:54.672555 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:58:54.672561 | orchestrator |
2026-04-04 00:58:54.672568 | orchestrator |
2026-04-04 00:58:54.672574 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:58:54.672581 | orchestrator | Saturday 04 April 2026 00:58:53 +0000 (0:00:01.343) 0:00:48.155 ********
2026-04-04 00:58:54.672588 | orchestrator | ===============================================================================
2026-04-04 00:58:54.672595 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.88s
2026-04-04 00:58:54.672602 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.52s
2026-04-04 00:58:54.672609 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.34s
2026-04-04 00:58:54.672616 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.34s
2026-04-04 00:58:54.672623 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.23s
2026-04-04 00:58:54.672629 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.88s
2026-04-04 00:58:54.672635 | orchestrator | osism.services.cephclient : Copy 
docker-compose.yml file ---------------- 0.75s 2026-04-04 00:58:54.672642 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.70s 2026-04-04 00:58:54.672648 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.64s 2026-04-04 00:58:54.672660 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.51s 2026-04-04 00:58:54.672666 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-04-04 00:58:54.672672 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2026-04-04 00:58:54.672678 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2026-04-04 00:58:54.672684 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2026-04-04 00:58:54.672694 | orchestrator | 2026-04-04 00:58:54 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:58:57.717862 | orchestrator | 2026-04-04 00:58:57 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:58:57.720326 | orchestrator | 2026-04-04 00:58:57 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:58:57.721332 | orchestrator | 2026-04-04 00:58:57 | INFO  | Task 8bc12108-346c-4d1a-935a-33df877224f4 is in state STARTED 2026-04-04 00:58:57.728197 | orchestrator | 2026-04-04 00:58:57 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:58:57.728247 | orchestrator | 2026-04-04 00:58:57 | INFO  | Task 23ff21c5-f278-46e1-b342-b9632177e4b6 is in state STARTED 2026-04-04 00:58:57.728256 | orchestrator | 2026-04-04 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:00.773137 | orchestrator | 2026-04-04 00:59:00 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:00.773756 | orchestrator | 
2026-04-04 00:59:00 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED [identical STARTED polling lines for tasks 915e2c19, 8bc12108, 6ceaaf9d, 23ff21c5 and 97ff843f, repeated every ~3 s from 00:59:00 through 00:59:22, trimmed] 2026-04-04 00:59:22.095397 | orchestrator | 2026-04-04 00:59:22 | INFO  | Task 
23ff21c5-f278-46e1-b342-b9632177e4b6 is in state SUCCESS 2026-04-04 00:59:22.098652 | orchestrator | 2026-04-04 00:59:22.098722 | orchestrator | 2026-04-04 00:59:22.098730 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:59:22.098738 | orchestrator | 2026-04-04 00:59:22.098744 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:59:22.098751 | orchestrator | Saturday 04 April 2026 00:57:45 +0000 (0:00:00.300) 0:00:00.300 ******** 2026-04-04 00:59:22.098758 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:59:22.098764 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:59:22.098771 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:59:22.098776 | orchestrator | 2026-04-04 00:59:22.098783 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:59:22.098789 | orchestrator | Saturday 04 April 2026 00:57:45 +0000 (0:00:00.262) 0:00:00.562 ******** 2026-04-04 00:59:22.098795 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-04 00:59:22.098802 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-04 00:59:22.098808 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-04 00:59:22.098814 | orchestrator | 2026-04-04 00:59:22.098820 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-04 00:59:22.098826 | orchestrator | 2026-04-04 00:59:22.098832 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-04 00:59:22.098838 | orchestrator | Saturday 04 April 2026 00:57:46 +0000 (0:00:00.290) 0:00:00.853 ******** 2026-04-04 00:59:22.098943 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:59:22.098955 | orchestrator | 2026-04-04 00:59:22.098959 | 
orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-04 00:59:22.098963 | orchestrator | Saturday 04 April 2026 00:57:46 +0000 (0:00:00.603) 0:00:01.456 ******** 2026-04-04 00:59:22.098985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.099020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.099030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.099038 | orchestrator | 2026-04-04 00:59:22.099042 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-04 00:59:22.099047 | orchestrator | Saturday 04 April 2026 00:57:48 +0000 (0:00:01.618) 0:00:03.075 ******** 2026-04-04 00:59:22.099050 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:59:22.099054 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:59:22.099058 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:59:22.099062 | orchestrator | 2026-04-04 00:59:22.099066 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-04 00:59:22.099072 | orchestrator | Saturday 04 April 2026 00:57:48 +0000 (0:00:00.214) 0:00:03.290 ******** 2026-04-04 00:59:22.099076 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-04 00:59:22.099080 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  
2026-04-04 00:59:22.099084 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-04 00:59:22.099088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-04 00:59:22.099092 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-04 00:59:22.099096 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-04 00:59:22.099100 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-04 00:59:22.099103 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-04 00:59:22.099273 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-04 00:59:22.099281 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-04 00:59:22.099287 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-04 00:59:22.099292 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-04 00:59:22.099298 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-04 00:59:22.099304 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-04 00:59:22.099310 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-04 00:59:22.099316 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-04 00:59:22.099322 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-04 00:59:22.099327 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-04 00:59:22.099333 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'ironic', 'enabled': False})  2026-04-04 00:59:22.099339 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-04 00:59:22.099345 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-04 00:59:22.099350 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-04 00:59:22.099361 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-04 00:59:22.099366 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-04 00:59:22.099380 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-04 00:59:22.099388 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-04 00:59:22.099394 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-04 00:59:22.099400 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-04 00:59:22.099406 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-04 00:59:22.099411 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-04 00:59:22.099417 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'manila', 'enabled': True}) 2026-04-04 00:59:22.099422 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-04 00:59:22.099428 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-04 00:59:22.099435 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-04 00:59:22.099465 | orchestrator | 2026-04-04 00:59:22.099471 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:59:22.099477 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:00.654) 0:00:03.944 ******** 2026-04-04 00:59:22.099483 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:59:22.099489 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:59:22.099494 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:59:22.099500 | orchestrator | 2026-04-04 00:59:22.099512 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:59:22.099518 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:00.249) 0:00:04.193 ******** 2026-04-04 00:59:22.099524 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099532 | orchestrator | 2026-04-04 00:59:22.099538 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:59:22.099544 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:00.094) 0:00:04.288 ******** 2026-04-04 00:59:22.099550 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099555 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.099561 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.099568 | orchestrator | 
2026-04-04 00:59:22.099572 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:59:22.099576 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:00.234) 0:00:04.523 ******** 2026-04-04 00:59:22.099580 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:59:22.099584 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:59:22.099587 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:59:22.099591 | orchestrator | 2026-04-04 00:59:22.099595 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:59:22.099600 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:00.249) 0:00:04.772 ******** 2026-04-04 00:59:22.099606 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099612 | orchestrator | 2026-04-04 00:59:22.099618 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:59:22.099623 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:00.102) 0:00:04.874 ******** 2026-04-04 00:59:22.099636 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099643 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.099649 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.099655 | orchestrator | 2026-04-04 00:59:22.099661 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:59:22.099668 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:00.325) 0:00:05.200 ******** 2026-04-04 00:59:22.099672 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:59:22.099677 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:59:22.099684 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:59:22.099689 | orchestrator | 2026-04-04 00:59:22.099700 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:59:22.099706 | orchestrator | Saturday 04 
April 2026 00:57:50 +0000 (0:00:00.248) 0:00:05.449 ******** 2026-04-04 00:59:22.099712 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099718 | orchestrator | 2026-04-04 00:59:22.099724 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:59:22.099730 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:00.119) 0:00:05.569 ******** 2026-04-04 00:59:22.099736 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099741 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.099746 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.099752 | orchestrator | 2026-04-04 00:59:22.099763 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:59:22.099769 | orchestrator | Saturday 04 April 2026 00:57:51 +0000 (0:00:00.246) 0:00:05.816 ******** 2026-04-04 00:59:22.099774 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:59:22.099780 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:59:22.099786 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:59:22.099792 | orchestrator | 2026-04-04 00:59:22.099798 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:59:22.099804 | orchestrator | Saturday 04 April 2026 00:57:51 +0000 (0:00:00.262) 0:00:06.078 ******** 2026-04-04 00:59:22.099811 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099817 | orchestrator | 2026-04-04 00:59:22.099822 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:59:22.099828 | orchestrator | Saturday 04 April 2026 00:57:51 +0000 (0:00:00.109) 0:00:06.188 ******** 2026-04-04 00:59:22.099835 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.099841 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.099847 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.099853 | 
orchestrator |
2026-04-04 00:59:22.099859 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-04 00:59:22.099867 | orchestrator | Saturday 04 April 2026 00:57:51 +0000 (0:00:00.351) 0:00:06.539 ********
2026-04-04 00:59:22.099871 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:59:22.099876 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:59:22.099882 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:59:22.099889 | orchestrator |
2026-04-04 00:59:22.099898 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-04 00:59:22.099905 | orchestrator | Saturday 04 April 2026 00:57:52 +0000 (0:00:00.270) 0:00:06.809 ********
2026-04-04 00:59:22.099911 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.099917 | orchestrator |
2026-04-04 00:59:22.099923 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-04 00:59:22.099929 | orchestrator | Saturday 04 April 2026 00:57:52 +0000 (0:00:00.101) 0:00:06.911 ********
2026-04-04 00:59:22.099936 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.099943 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.099950 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.099956 | orchestrator |
2026-04-04 00:59:22.099962 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-04 00:59:22.099970 | orchestrator | Saturday 04 April 2026 00:57:52 +0000 (0:00:00.241) 0:00:07.152 ********
2026-04-04 00:59:22.099981 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:59:22.099985 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:59:22.099990 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:59:22.099994 | orchestrator |
2026-04-04 00:59:22.099999 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-04 00:59:22.100003 | orchestrator | Saturday 04 April 2026 00:57:52 +0000 (0:00:00.423) 0:00:07.576 ********
2026-04-04 00:59:22.100008 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100012 | orchestrator |
2026-04-04 00:59:22.100017 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-04 00:59:22.100021 | orchestrator | Saturday 04 April 2026 00:57:52 +0000 (0:00:00.102) 0:00:07.678 ********
2026-04-04 00:59:22.100026 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100030 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.100041 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.100046 | orchestrator |
2026-04-04 00:59:22.100050 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-04 00:59:22.100055 | orchestrator | Saturday 04 April 2026 00:57:53 +0000 (0:00:00.266) 0:00:07.945 ********
2026-04-04 00:59:22.100059 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:59:22.100064 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:59:22.100068 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:59:22.100073 | orchestrator |
2026-04-04 00:59:22.100077 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-04 00:59:22.100081 | orchestrator | Saturday 04 April 2026 00:57:53 +0000 (0:00:00.259) 0:00:08.204 ********
2026-04-04 00:59:22.100085 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100089 | orchestrator |
2026-04-04 00:59:22.100092 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-04 00:59:22.100096 | orchestrator | Saturday 04 April 2026 00:57:53 +0000 (0:00:00.119) 0:00:08.324 ********
2026-04-04 00:59:22.100100 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100103 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.100107 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.100111 | orchestrator |
2026-04-04 00:59:22.100114 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-04 00:59:22.100118 | orchestrator | Saturday 04 April 2026 00:57:53 +0000 (0:00:00.229) 0:00:08.553 ********
2026-04-04 00:59:22.100122 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:59:22.100126 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:59:22.100130 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:59:22.100133 | orchestrator |
2026-04-04 00:59:22.100137 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-04 00:59:22.100141 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:00.345) 0:00:08.899 ********
2026-04-04 00:59:22.100144 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100148 | orchestrator |
2026-04-04 00:59:22.100152 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-04 00:59:22.100156 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:00.108) 0:00:09.008 ********
2026-04-04 00:59:22.100159 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100163 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.100167 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.100171 | orchestrator |
2026-04-04 00:59:22.100174 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-04 00:59:22.100178 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:00.318) 0:00:09.326 ********
2026-04-04 00:59:22.100182 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:59:22.100185 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:59:22.100189 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:59:22.100193 | orchestrator |
2026-04-04 00:59:22.100196 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-04 00:59:22.100200 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:00.318) 0:00:09.644 ********
2026-04-04 00:59:22.100204 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100211 | orchestrator |
2026-04-04 00:59:22.100219 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-04 00:59:22.100223 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.143) 0:00:09.787 ********
2026-04-04 00:59:22.100227 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100230 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.100234 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.100238 | orchestrator |
2026-04-04 00:59:22.100242 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-04 00:59:22.100245 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.232) 0:00:10.020 ********
2026-04-04 00:59:22.100249 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:59:22.100253 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:59:22.100256 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:59:22.100260 | orchestrator |
2026-04-04 00:59:22.100264 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-04 00:59:22.100268 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.371) 0:00:10.391 ********
2026-04-04 00:59:22.100271 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100275 | orchestrator |
2026-04-04 00:59:22.100279 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-04 00:59:22.100282 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.144) 0:00:10.536 ********
2026-04-04 00:59:22.100286 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100290 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.100293 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.100297 | orchestrator |
2026-04-04 00:59:22.100301 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-04 00:59:22.100305 | orchestrator | Saturday 04 April 2026 00:57:56 +0000 (0:00:00.338) 0:00:10.874 ********
2026-04-04 00:59:22.100308 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:59:22.100312 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:59:22.100316 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:59:22.100320 | orchestrator |
2026-04-04 00:59:22.100323 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-04 00:59:22.100327 | orchestrator | Saturday 04 April 2026 00:57:57 +0000 (0:00:01.544) 0:00:12.418 ********
2026-04-04 00:59:22.100331 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-04 00:59:22.100335 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-04 00:59:22.100339 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-04 00:59:22.100342 | orchestrator |
2026-04-04 00:59:22.100346 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-04 00:59:22.100350 | orchestrator | Saturday 04 April 2026 00:57:59 +0000 (0:00:01.894) 0:00:14.313 ********
2026-04-04 00:59:22.100354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-04 00:59:22.100359 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-04 00:59:22.100365 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-04 00:59:22.100369 | orchestrator |
2026-04-04 00:59:22.100373 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-04 00:59:22.100377 | orchestrator | Saturday 04 April 2026 00:58:02 +0000 (0:00:02.517) 0:00:16.830 ********
2026-04-04 00:59:22.100381 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-04 00:59:22.100384 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-04 00:59:22.100388 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-04 00:59:22.100392 | orchestrator |
2026-04-04 00:59:22.100396 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-04 00:59:22.100403 | orchestrator | Saturday 04 April 2026 00:58:04 +0000 (0:00:01.980) 0:00:18.811 ********
2026-04-04 00:59:22.100406 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100410 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.100414 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.100417 | orchestrator |
2026-04-04 00:59:22.100421 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-04 00:59:22.100425 | orchestrator | Saturday 04 April 2026 00:58:04 +0000 (0:00:00.261) 0:00:19.072 ********
2026-04-04 00:59:22.100429 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:59:22.100432 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:59:22.100436 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:59:22.100463 | orchestrator |
2026-04-04 00:59:22.100469 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-04 00:59:22.100475 | orchestrator | Saturday 04 April 2026 00:58:04 +0000 (0:00:00.673) 0:00:19.320 ********
2026-04-04 00:59:22.100481 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:59:22.100488 | orchestrator | 2026-04-04 00:59:22.100494 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-04 00:59:22.100499 | orchestrator | Saturday 04 April 2026 00:58:05 +0000 (0:00:00.673) 0:00:19.993 ******** 2026-04-04 00:59:22.100512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.100526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.100546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.100555 | orchestrator | 2026-04-04 00:59:22.100559 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-04 00:59:22.100563 | orchestrator | Saturday 04 April 2026 00:58:06 +0000 (0:00:01.489) 0:00:21.482 ******** 2026-04-04 00:59:22.100570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100574 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.100582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100590 | 
orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.100597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100602 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.100606 | orchestrator | 2026-04-04 00:59:22.100609 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-04 00:59:22.100613 | orchestrator | Saturday 04 April 2026 00:58:07 +0000 (0:00:00.713) 0:00:22.196 ******** 2026-04-04 00:59:22.100620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100628 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.100636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100640 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.100647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100656 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.100659 | orchestrator | 2026-04-04 00:59:22.100663 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-04 00:59:22.100667 | orchestrator | Saturday 04 April 2026 00:58:08 +0000 (0:00:00.979) 0:00:23.175 ******** 2026-04-04 00:59:22.100674 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.100689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.100703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:59:22.100718 | orchestrator | 2026-04-04 00:59:22.100724 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-04 00:59:22.100730 | orchestrator | Saturday 04 April 2026 00:58:09 +0000 (0:00:01.342) 0:00:24.518 ******** 2026-04-04 00:59:22.100736 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 00:59:22.100741 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:59:22.100747 | orchestrator | } 2026-04-04 00:59:22.100753 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 00:59:22.100759 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:59:22.100765 | orchestrator | } 2026-04-04 00:59:22.100772 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 00:59:22.100778 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 00:59:22.100784 | orchestrator | } 2026-04-04 00:59:22.100790 | orchestrator | 2026-04-04 00:59:22.100796 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 00:59:22.100802 | orchestrator | Saturday 04 April 2026 00:58:10 +0000 (0:00:00.272) 0:00:24.790 ******** 2026-04-04 00:59:22.100813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100822 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.100830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100835 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.100844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-04 00:59:22.100851 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.100855 | orchestrator | 2026-04-04 00:59:22.100859 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-04 00:59:22.100863 | orchestrator | Saturday 04 April 2026 00:58:11 +0000 (0:00:00.993) 0:00:25.784 ******** 2026-04-04 00:59:22.100867 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:59:22.100870 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:59:22.100874 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:59:22.100878 | orchestrator | 2026-04-04 00:59:22.100881 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-04 00:59:22.100885 | orchestrator | Saturday 04 April 2026 00:58:11 +0000 (0:00:00.319) 0:00:26.103 ******** 2026-04-04 00:59:22.100889 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:59:22.100893 | orchestrator | 2026-04-04 00:59:22.100899 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-04 00:59:22.100903 | orchestrator | Saturday 04 
April 2026 00:58:12 +0000 (0:00:00.766) 0:00:26.870 ******** 2026-04-04 00:59:22.100907 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:59:22.100911 | orchestrator | 2026-04-04 00:59:22.100915 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-04 00:59:22.100919 | orchestrator | Saturday 04 April 2026 00:58:14 +0000 (0:00:02.561) 0:00:29.432 ******** 2026-04-04 00:59:22.100923 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:59:22.100927 | orchestrator | 2026-04-04 00:59:22.100930 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-04 00:59:22.100934 | orchestrator | Saturday 04 April 2026 00:58:17 +0000 (0:00:02.532) 0:00:31.964 ******** 2026-04-04 00:59:22.100938 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:59:22.100942 | orchestrator | 2026-04-04 00:59:22.100946 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-04 00:59:22.100949 | orchestrator | Saturday 04 April 2026 00:58:34 +0000 (0:00:17.249) 0:00:49.214 ******** 2026-04-04 00:59:22.100953 | orchestrator | 2026-04-04 00:59:22.100957 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-04 00:59:22.100961 | orchestrator | Saturday 04 April 2026 00:58:34 +0000 (0:00:00.057) 0:00:49.271 ******** 2026-04-04 00:59:22.100965 | orchestrator | 2026-04-04 00:59:22.100968 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-04 00:59:22.100972 | orchestrator | Saturday 04 April 2026 00:58:34 +0000 (0:00:00.057) 0:00:49.328 ******** 2026-04-04 00:59:22.100976 | orchestrator | 2026-04-04 00:59:22.100980 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-04 00:59:22.100984 | orchestrator | Saturday 04 April 2026 00:58:34 +0000 (0:00:00.060) 0:00:49.388 ******** 2026-04-04 
00:59:22.100987 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:59:22.100991 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:59:22.100995 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:59:22.100999 | orchestrator | 2026-04-04 00:59:22.101003 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:59:22.101007 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-04 00:59:22.101012 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-04 00:59:22.101019 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-04 00:59:22.101023 | orchestrator | 2026-04-04 00:59:22.101027 | orchestrator | 2026-04-04 00:59:22.101031 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:59:22.101038 | orchestrator | Saturday 04 April 2026 00:59:19 +0000 (0:00:44.703) 0:01:34.092 ******** 2026-04-04 00:59:22.101042 | orchestrator | =============================================================================== 2026-04-04 00:59:22.101046 | orchestrator | horizon : Restart horizon container ------------------------------------ 44.71s 2026-04-04 00:59:22.101050 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.25s 2026-04-04 00:59:22.101054 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.56s 2026-04-04 00:59:22.101058 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.53s 2026-04-04 00:59:22.101062 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.52s 2026-04-04 00:59:22.101065 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.98s 2026-04-04 00:59:22.101069 | 
orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.89s 2026-04-04 00:59:22.101073 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.62s 2026-04-04 00:59:22.101077 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.54s 2026-04-04 00:59:22.101081 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.49s 2026-04-04 00:59:22.101085 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.34s 2026-04-04 00:59:22.101088 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.99s 2026-04-04 00:59:22.101092 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.98s 2026-04-04 00:59:22.101096 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-04-04 00:59:22.101100 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2026-04-04 00:59:22.101104 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2026-04-04 00:59:22.101107 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2026-04-04 00:59:22.101122 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-04-04 00:59:22.101127 | orchestrator | horizon : Update policy file name --------------------------------------- 0.42s 2026-04-04 00:59:22.101130 | orchestrator | horizon : Update policy file name --------------------------------------- 0.37s 2026-04-04 00:59:22.101134 | orchestrator | 2026-04-04 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:25.143989 | orchestrator | 2026-04-04 00:59:25 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:25.145956 | 
orchestrator | 2026-04-04 00:59:25 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:25.147083 | orchestrator | 2026-04-04 00:59:25 | INFO  | Task 8bc12108-346c-4d1a-935a-33df877224f4 is in state STARTED 2026-04-04 00:59:25.148577 | orchestrator | 2026-04-04 00:59:25 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:25.148620 | orchestrator | 2026-04-04 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:28.181167 | orchestrator | 2026-04-04 00:59:28 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:28.181992 | orchestrator | 2026-04-04 00:59:28 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:28.183369 | orchestrator | 2026-04-04 00:59:28 | INFO  | Task 8bc12108-346c-4d1a-935a-33df877224f4 is in state STARTED 2026-04-04 00:59:28.184496 | orchestrator | 2026-04-04 00:59:28 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:28.184520 | orchestrator | 2026-04-04 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:31.236317 | orchestrator | 2026-04-04 00:59:31 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:31.237255 | orchestrator | 2026-04-04 00:59:31 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:31.240518 | orchestrator | 2026-04-04 00:59:31 | INFO  | Task 8bc12108-346c-4d1a-935a-33df877224f4 is in state STARTED 2026-04-04 00:59:31.242714 | orchestrator | 2026-04-04 00:59:31 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:31.242764 | orchestrator | 2026-04-04 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:34.290225 | orchestrator | 2026-04-04 00:59:34 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:34.294622 | orchestrator | 2026-04-04 
00:59:34 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:34.297778 | orchestrator | 2026-04-04 00:59:34 | INFO  | Task 8bc12108-346c-4d1a-935a-33df877224f4 is in state STARTED 2026-04-04 00:59:34.300384 | orchestrator | 2026-04-04 00:59:34 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:34.300629 | orchestrator | 2026-04-04 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:37.344951 | orchestrator | 2026-04-04 00:59:37 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:37.346724 | orchestrator | 2026-04-04 00:59:37 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:37.348610 | orchestrator | 2026-04-04 00:59:37 | INFO  | Task 8bc12108-346c-4d1a-935a-33df877224f4 is in state STARTED 2026-04-04 00:59:37.350183 | orchestrator | 2026-04-04 00:59:37 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:37.350234 | orchestrator | 2026-04-04 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:40.393675 | orchestrator | 2026-04-04 00:59:40 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:40.394500 | orchestrator | 2026-04-04 00:59:40 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:40.399674 | orchestrator | 2026-04-04 00:59:40 | INFO  | Task 8bc12108-346c-4d1a-935a-33df877224f4 is in state SUCCESS 2026-04-04 00:59:40.399712 | orchestrator | 2026-04-04 00:59:40 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:40.399717 | orchestrator | 2026-04-04 00:59:40 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 00:59:40.399721 | orchestrator | 2026-04-04 00:59:40 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 00:59:40.399725 | orchestrator | 2026-04-04 
00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:43.430281 | orchestrator | 2026-04-04 00:59:43 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:43.430850 | orchestrator | 2026-04-04 00:59:43 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:43.431505 | orchestrator | 2026-04-04 00:59:43 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:43.432192 | orchestrator | 2026-04-04 00:59:43 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 00:59:43.432929 | orchestrator | 2026-04-04 00:59:43 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 00:59:43.432949 | orchestrator | 2026-04-04 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:46.462070 | orchestrator | 2026-04-04 00:59:46 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:46.463061 | orchestrator | 2026-04-04 00:59:46 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:46.464070 | orchestrator | 2026-04-04 00:59:46 | INFO  | Task 6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state STARTED 2026-04-04 00:59:46.465154 | orchestrator | 2026-04-04 00:59:46 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 00:59:46.467447 | orchestrator | 2026-04-04 00:59:46 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 00:59:46.467730 | orchestrator | 2026-04-04 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:49.506960 | orchestrator | 2026-04-04 00:59:49 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:49.509375 | orchestrator | 2026-04-04 00:59:49 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:49.509863 | orchestrator | 2026-04-04 00:59:49 | INFO  | Task 
6ceaaf9d-e691-4417-a0f1-37995f47a8cc is in state SUCCESS 2026-04-04 00:59:49.510545 | orchestrator | 2026-04-04 00:59:49 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 00:59:49.514740 | orchestrator | 2026-04-04 00:59:49 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 00:59:49.514788 | orchestrator | 2026-04-04 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:52.551172 | orchestrator | 2026-04-04 00:59:52 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:52.552911 | orchestrator | 2026-04-04 00:59:52 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:52.555076 | orchestrator | 2026-04-04 00:59:52 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 00:59:52.557794 | orchestrator | 2026-04-04 00:59:52 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 00:59:52.557869 | orchestrator | 2026-04-04 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:55.593539 | orchestrator | 2026-04-04 00:59:55 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:55.594100 | orchestrator | 2026-04-04 00:59:55 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:55.595629 | orchestrator | 2026-04-04 00:59:55 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 00:59:55.597199 | orchestrator | 2026-04-04 00:59:55 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 00:59:55.597225 | orchestrator | 2026-04-04 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:59:58.645231 | orchestrator | 2026-04-04 00:59:58 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 00:59:58.645316 | orchestrator | 2026-04-04 00:59:58 | INFO  | Task 
915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 00:59:58.647054 | orchestrator | 2026-04-04 00:59:58 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 00:59:58.648272 | orchestrator | 2026-04-04 00:59:58 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 00:59:58.648333 | orchestrator | 2026-04-04 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:00:01.694922 | orchestrator | 2026-04-04 01:00:01 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:00:01.697239 | orchestrator | 2026-04-04 01:00:01 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state STARTED 2026-04-04 01:00:01.699499 | orchestrator | 2026-04-04 01:00:01 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:00:01.701263 | orchestrator | 2026-04-04 01:00:01 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 01:00:01.701544 | orchestrator | 2026-04-04 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:00:04.738852 | orchestrator | 2026-04-04 01:00:04 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:00:04.740640 | orchestrator | 2026-04-04 01:00:04 | INFO  | Task 915e2c19-623b-4e84-b2b0-edaf0bea3201 is in state SUCCESS 2026-04-04 01:00:04.741700 | orchestrator | 2026-04-04 01:00:04.741744 | orchestrator | 2026-04-04 01:00:04.741753 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:00:04.741761 | orchestrator | 2026-04-04 01:00:04.741769 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:00:04.741775 | orchestrator | Saturday 04 April 2026 00:58:56 +0000 (0:00:00.172) 0:00:00.172 ******** 2026-04-04 01:00:04.741783 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.741790 | orchestrator | ok: [testbed-node-1] 
2026-04-04 01:00:04.741797 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:04.741804 | orchestrator | 2026-04-04 01:00:04.741812 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:00:04.741818 | orchestrator | Saturday 04 April 2026 00:58:57 +0000 (0:00:00.301) 0:00:00.473 ******** 2026-04-04 01:00:04.741824 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-04 01:00:04.741831 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-04 01:00:04.741836 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-04 01:00:04.741842 | orchestrator | 2026-04-04 01:00:04.741847 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-04-04 01:00:04.741854 | orchestrator | 2026-04-04 01:00:04.741861 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-04-04 01:00:04.741867 | orchestrator | Saturday 04 April 2026 00:58:57 +0000 (0:00:00.461) 0:00:00.934 ******** 2026-04-04 01:00:04.741874 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.741881 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:04.741888 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:04.741894 | orchestrator | 2026-04-04 01:00:04.741900 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:00:04.741996 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:00:04.742009 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:00:04.742195 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:00:04.742203 | orchestrator | 2026-04-04 01:00:04.742209 | orchestrator | 2026-04-04 01:00:04.742216 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-04 01:00:04.742223 | orchestrator | Saturday 04 April 2026 00:59:38 +0000 (0:00:41.027) 0:00:41.961 ******** 2026-04-04 01:00:04.742229 | orchestrator | =============================================================================== 2026-04-04 01:00:04.742236 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 41.03s 2026-04-04 01:00:04.742242 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-04-04 01:00:04.742269 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-04-04 01:00:04.742276 | orchestrator | 2026-04-04 01:00:04.742282 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-04 01:00:04.742289 | orchestrator | 2.16.14 2026-04-04 01:00:04.742295 | orchestrator | 2026-04-04 01:00:04.742309 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-04-04 01:00:04.742315 | orchestrator | 2026-04-04 01:00:04.742321 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-04 01:00:04.742327 | orchestrator | Saturday 04 April 2026 00:58:58 +0000 (0:00:00.221) 0:00:00.221 ******** 2026-04-04 01:00:04.742333 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742340 | orchestrator | 2026-04-04 01:00:04.742346 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-04 01:00:04.742352 | orchestrator | Saturday 04 April 2026 00:59:00 +0000 (0:00:02.361) 0:00:02.583 ******** 2026-04-04 01:00:04.742359 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742398 | orchestrator | 2026-04-04 01:00:04.742406 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-04 01:00:04.742412 | orchestrator | Saturday
04 April 2026 00:59:01 +0000 (0:00:01.387) 0:00:03.970 ******** 2026-04-04 01:00:04.742418 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742424 | orchestrator | 2026-04-04 01:00:04.742431 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-04 01:00:04.742437 | orchestrator | Saturday 04 April 2026 00:59:02 +0000 (0:00:01.051) 0:00:05.022 ******** 2026-04-04 01:00:04.742444 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742450 | orchestrator | 2026-04-04 01:00:04.742456 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-04 01:00:04.742463 | orchestrator | Saturday 04 April 2026 00:59:04 +0000 (0:00:01.308) 0:00:06.331 ******** 2026-04-04 01:00:04.742468 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742474 | orchestrator | 2026-04-04 01:00:04.742480 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-04 01:00:04.742486 | orchestrator | Saturday 04 April 2026 00:59:05 +0000 (0:00:01.123) 0:00:07.454 ******** 2026-04-04 01:00:04.742492 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742498 | orchestrator | 2026-04-04 01:00:04.742504 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-04 01:00:04.742655 | orchestrator | Saturday 04 April 2026 00:59:06 +0000 (0:00:01.346) 0:00:08.800 ******** 2026-04-04 01:00:04.742662 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742669 | orchestrator | 2026-04-04 01:00:04.742676 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-04 01:00:04.742684 | orchestrator | Saturday 04 April 2026 00:59:08 +0000 (0:00:01.945) 0:00:10.746 ******** 2026-04-04 01:00:04.742692 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742699 | orchestrator | 2026-04-04 01:00:04.742706 | 
orchestrator | TASK [Create admin user] ******************************************************* 2026-04-04 01:00:04.742713 | orchestrator | Saturday 04 April 2026 00:59:09 +0000 (0:00:01.187) 0:00:11.933 ******** 2026-04-04 01:00:04.742720 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:04.742726 | orchestrator | 2026-04-04 01:00:04.742759 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-04 01:00:04.742767 | orchestrator | Saturday 04 April 2026 00:59:21 +0000 (0:00:11.717) 0:00:23.651 ******** 2026-04-04 01:00:04.742775 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:04.742782 | orchestrator | 2026-04-04 01:00:04.742788 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-04 01:00:04.742795 | orchestrator | 2026-04-04 01:00:04.742801 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-04 01:00:04.742807 | orchestrator | Saturday 04 April 2026 00:59:21 +0000 (0:00:00.156) 0:00:23.808 ******** 2026-04-04 01:00:04.742814 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.742821 | orchestrator | 2026-04-04 01:00:04.742839 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-04 01:00:04.742846 | orchestrator | 2026-04-04 01:00:04.742853 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-04 01:00:04.742860 | orchestrator | Saturday 04 April 2026 00:59:23 +0000 (0:00:01.651) 0:00:25.460 ******** 2026-04-04 01:00:04.742867 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:04.742873 | orchestrator | 2026-04-04 01:00:04.742880 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-04 01:00:04.742886 | orchestrator | 2026-04-04 01:00:04.742893 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2026-04-04 01:00:04.742899 | orchestrator | Saturday 04 April 2026 00:59:35 +0000 (0:00:11.851) 0:00:37.312 ******** 2026-04-04 01:00:04.742906 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:04.742913 | orchestrator | 2026-04-04 01:00:04.742920 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:00:04.742927 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 01:00:04.742935 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:00:04.742942 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:00:04.742949 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:00:04.742955 | orchestrator | 2026-04-04 01:00:04.742962 | orchestrator | 2026-04-04 01:00:04.742968 | orchestrator | 2026-04-04 01:00:04.742974 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:00:04.742981 | orchestrator | Saturday 04 April 2026 00:59:46 +0000 (0:00:11.608) 0:00:48.920 ******** 2026-04-04 01:00:04.742987 | orchestrator | =============================================================================== 2026-04-04 01:00:04.742993 | orchestrator | Restart ceph manager service ------------------------------------------- 25.11s 2026-04-04 01:00:04.742999 | orchestrator | Create admin user ------------------------------------------------------ 11.72s 2026-04-04 01:00:04.743011 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.36s 2026-04-04 01:00:04.743018 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.95s 2026-04-04 01:00:04.743024 | orchestrator | Set mgr/dashboard/ssl to false 
------------------------------------------ 1.39s 2026-04-04 01:00:04.743030 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.35s 2026-04-04 01:00:04.743037 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.31s 2026-04-04 01:00:04.743043 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2026-04-04 01:00:04.743050 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.12s 2026-04-04 01:00:04.743056 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.05s 2026-04-04 01:00:04.743062 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2026-04-04 01:00:04.743068 | orchestrator | 2026-04-04 01:00:04.743074 | orchestrator | 2026-04-04 01:00:04.743081 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:00:04.743088 | orchestrator | 2026-04-04 01:00:04.743094 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:00:04.743101 | orchestrator | Saturday 04 April 2026 00:57:45 +0000 (0:00:00.263) 0:00:00.263 ******** 2026-04-04 01:00:04.743108 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.743114 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:04.743121 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:04.743127 | orchestrator | 2026-04-04 01:00:04.743134 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:00:04.743146 | orchestrator | Saturday 04 April 2026 00:57:45 +0000 (0:00:00.247) 0:00:00.510 ******** 2026-04-04 01:00:04.743153 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-04 01:00:04.743159 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-04 01:00:04.743165 | 
orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-04 01:00:04.743171 | orchestrator | 2026-04-04 01:00:04.743177 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-04 01:00:04.743183 | orchestrator | 2026-04-04 01:00:04.743189 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 01:00:04.743195 | orchestrator | Saturday 04 April 2026 00:57:46 +0000 (0:00:00.344) 0:00:00.854 ******** 2026-04-04 01:00:04.743201 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:04.743208 | orchestrator | 2026-04-04 01:00:04.743215 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-04 01:00:04.743221 | orchestrator | Saturday 04 April 2026 00:57:46 +0000 (0:00:00.664) 0:00:01.519 ******** 2026-04-04 01:00:04.743259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.743270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.743283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.743296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743388 | orchestrator | 2026-04-04 01:00:04.743395 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-04 01:00:04.743402 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:02.280) 0:00:03.800 ******** 2026-04-04 01:00:04.743408 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.743414 | orchestrator | 2026-04-04 01:00:04.743420 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-04 01:00:04.743427 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:00.104) 0:00:03.904 ******** 2026-04-04 01:00:04.743433 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.743439 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.743445 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.743451 | orchestrator | 2026-04-04 01:00:04.743457 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-04 01:00:04.743463 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:00.244) 0:00:04.149 ******** 2026-04-04 01:00:04.743520 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:00:04.743527 | orchestrator | 2026-04-04 01:00:04.743534 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 01:00:04.743541 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:00.784) 0:00:04.933 ******** 2026-04-04 01:00:04.743548 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:04.743554 | orchestrator | 2026-04-04 01:00:04.743561 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 
2026-04-04 01:00:04.743567 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:00.537) 0:00:05.471 ******** 2026-04-04 01:00:04.743592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.743600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.743611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.743623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 
01:00:04.743666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.743686 | orchestrator | 2026-04-04 01:00:04.743692 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-04 01:00:04.743699 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:03.383) 0:00:08.855 ******** 2026-04-04 01:00:04.743705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.743717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.743724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.743771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.743787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.743793 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.743800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.743806 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.743819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.743826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.743832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.743842 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.743849 | orchestrator | 2026-04-04 01:00:04.743856 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-04 01:00:04.743862 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:00.519) 0:00:09.374 ******** 2026-04-04 01:00:04.743871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.743878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.743884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.743890 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.743902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.743909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.743919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.743989 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.744001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.744009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.744022 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.744029 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.744036 | orchestrator | 2026-04-04 01:00:04.744042 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-04 01:00:04.744048 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.850) 0:00:10.224 ******** 2026-04-04 01:00:04.744055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.744072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.744081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.744094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744147 | orchestrator | 2026-04-04 01:00:04.744154 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-04 01:00:04.744161 | orchestrator | Saturday 04 April 2026 00:57:58 +0000 (0:00:03.270) 0:00:13.495 ******** 2026-04-04 01:00:04.744174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.744187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.744197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.744204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.744211 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.744222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.744234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.744260 | orchestrator | 2026-04-04 01:00:04.744266 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-04 01:00:04.744273 | orchestrator | Saturday 04 April 2026 00:58:04 +0000 (0:00:05.616) 0:00:19.112 ******** 2026-04-04 01:00:04.744279 | 
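The task items dumped above each carry a `healthcheck` mapping and a `volumes` list (note the empty `''` entry in the keystone container's volumes, left behind by a conditional volume that rendered empty). As an illustrative sketch only — the field names are taken from the log, but the mapping to `docker run` health flags is an assumption for clarity, not kolla-ansible's actual implementation:

```python
# Sketch: how a kolla-style healthcheck dict (as dumped in the task items
# above) roughly corresponds to docker health flags. Values are strings of
# seconds in the log, so we append the "s" unit suffix here.
healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
    "timeout": "30",
}

def to_docker_flags(hc: dict) -> list[str]:
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    kind, cmd = hc["test"][0], " ".join(hc["test"][1:])
    if kind == "CMD-SHELL":  # run the check through a shell inside the container
        flags.append(f"--health-cmd={cmd}")
    return flags

# The empty '' volume entry seen in the log would have to be filtered out
# before being passed to the container runtime:
volumes = ["/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro", "",
           "kolla_logs:/var/log/kolla/"]
mounts = [v for v in volumes if v]

print(to_docker_flags(healthcheck))
print(mounts)
```

This is only meant to make the repeated dict dumps above easier to read; the real translation happens inside kolla-ansible's container modules.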
orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.744286 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:04.744293 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:04.744299 | orchestrator | 2026-04-04 01:00:04.744307 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-04 01:00:04.744314 | orchestrator | Saturday 04 April 2026 00:58:05 +0000 (0:00:01.343) 0:00:20.455 ******** 2026-04-04 01:00:04.744321 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.744328 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.744335 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.744341 | orchestrator | 2026-04-04 01:00:04.744348 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-04 01:00:04.744354 | orchestrator | Saturday 04 April 2026 00:58:06 +0000 (0:00:00.669) 0:00:21.125 ******** 2026-04-04 01:00:04.744361 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.744461 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.744470 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.744477 | orchestrator | 2026-04-04 01:00:04.744484 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-04 01:00:04.744491 | orchestrator | Saturday 04 April 2026 00:58:06 +0000 (0:00:00.238) 0:00:21.363 ******** 2026-04-04 01:00:04.744498 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.744505 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.744512 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.744519 | orchestrator | 2026-04-04 01:00:04.744526 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-04 01:00:04.744533 | orchestrator | Saturday 04 April 2026 00:58:07 +0000 (0:00:00.366) 0:00:21.730 ******** 2026-04-04 01:00:04.744546 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.744561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.744569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.744576 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.744588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.744597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.744608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.744616 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.744629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.744638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.744649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.744657 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.744665 | orchestrator | 2026-04-04 01:00:04.744672 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 01:00:04.744679 | orchestrator | Saturday 04 April 2026 00:58:07 +0000 (0:00:00.585) 0:00:22.315 ******** 2026-04-04 01:00:04.744686 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.744693 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.744702 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.744709 | orchestrator | 2026-04-04 01:00:04.744717 | orchestrator | TASK 
[keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-04 01:00:04.744725 | orchestrator | Saturday 04 April 2026 00:58:08 +0000 (0:00:00.434) 0:00:22.749 ******** 2026-04-04 01:00:04.744732 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-04 01:00:04.744741 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-04 01:00:04.744753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-04 01:00:04.744761 | orchestrator | 2026-04-04 01:00:04.744770 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-04 01:00:04.744778 | orchestrator | Saturday 04 April 2026 00:58:09 +0000 (0:00:01.660) 0:00:24.410 ******** 2026-04-04 01:00:04.744786 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:00:04.744794 | orchestrator | 2026-04-04 01:00:04.744802 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-04 01:00:04.744810 | orchestrator | Saturday 04 April 2026 00:58:10 +0000 (0:00:01.127) 0:00:25.537 ******** 2026-04-04 01:00:04.744818 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.744826 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.744834 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.744843 | orchestrator | 2026-04-04 01:00:04.744851 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-04 01:00:04.744858 | orchestrator | Saturday 04 April 2026 00:58:11 +0000 (0:00:00.574) 0:00:26.112 ******** 2026-04-04 01:00:04.744865 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-04 01:00:04.744872 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-04 01:00:04.744880 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 
01:00:04.744887 | orchestrator | 2026-04-04 01:00:04.744894 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-04 01:00:04.744901 | orchestrator | Saturday 04 April 2026 00:58:12 +0000 (0:00:01.266) 0:00:27.379 ******** 2026-04-04 01:00:04.744907 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.744914 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:04.744920 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:04.744927 | orchestrator | 2026-04-04 01:00:04.744940 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-04 01:00:04.744948 | orchestrator | Saturday 04 April 2026 00:58:13 +0000 (0:00:00.373) 0:00:27.753 ******** 2026-04-04 01:00:04.744955 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-04 01:00:04.744962 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-04 01:00:04.744969 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-04 01:00:04.744976 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-04 01:00:04.744983 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-04 01:00:04.744991 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-04 01:00:04.744998 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-04 01:00:04.745005 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-04 01:00:04.745012 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-04 
01:00:04.745019 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-04 01:00:04.745026 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-04 01:00:04.745033 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-04 01:00:04.745040 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-04 01:00:04.745047 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-04 01:00:04.745054 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-04 01:00:04.745067 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:00:04.745074 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:00:04.745081 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:00:04.745088 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:00:04.745096 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:00:04.745107 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:00:04.745114 | orchestrator | 2026-04-04 01:00:04.745121 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-04 01:00:04.745128 | orchestrator | Saturday 04 April 2026 00:58:22 +0000 (0:00:09.221) 0:00:36.974 ******** 2026-04-04 01:00:04.745135 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 
01:00:04.745143 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 01:00:04.745150 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 01:00:04.745157 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:00:04.745164 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:00:04.745171 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:00:04.745178 | orchestrator | 2026-04-04 01:00:04.745185 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-04 01:00:04.745192 | orchestrator | Saturday 04 April 2026 00:58:25 +0000 (0:00:02.660) 0:00:39.634 ******** 2026-04-04 01:00:04.745205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}}}}) 2026-04-04 01:00:04.745213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.745229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-04 01:00:04.745237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.745245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.745253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 01:00:04.745264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.745272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.745285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 01:00:04.745293 | orchestrator | 2026-04-04 01:00:04.745299 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-04 01:00:04.745305 | orchestrator | Saturday 04 April 2026 00:58:27 +0000 (0:00:02.568) 0:00:42.202 ******** 2026-04-04 01:00:04.745310 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:00:04.745316 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:00:04.745323 | orchestrator | } 2026-04-04 01:00:04.745330 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:00:04.745336 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:00:04.745343 | orchestrator | } 2026-04-04 01:00:04.745355 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:00:04.745362 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:00:04.745405 | orchestrator | } 2026-04-04 01:00:04.745412 | orchestrator | 2026-04-04 01:00:04.745418 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:00:04.745424 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.384) 0:00:42.587 ******** 2026-04-04 01:00:04.745431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.745438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.745449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.745461 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.745467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.745477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.745484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.745490 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.745497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-04 01:00:04.745506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 01:00:04.745516 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 01:00:04.745523 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.745529 | orchestrator | 2026-04-04 01:00:04.745535 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 01:00:04.745540 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.650) 0:00:43.237 ******** 2026-04-04 01:00:04.745546 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.745552 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.745559 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.745565 | orchestrator | 2026-04-04 01:00:04.745571 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-04 01:00:04.745577 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.244) 0:00:43.482 ******** 2026-04-04 01:00:04.745583 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.745590 | orchestrator | 2026-04-04 01:00:04.745596 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-04 01:00:04.745602 | orchestrator | Saturday 04 April 2026 00:58:31 +0000 (0:00:02.284) 0:00:45.766 ******** 2026-04-04 01:00:04.745608 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.745615 | orchestrator | 2026-04-04 01:00:04.745622 | orchestrator | TASK 
[keystone : Checking for any running keystone_fernet containers] ********** 2026-04-04 01:00:04.745628 | orchestrator | Saturday 04 April 2026 00:58:33 +0000 (0:00:02.328) 0:00:48.094 ******** 2026-04-04 01:00:04.745635 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:04.745641 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.745647 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:04.745653 | orchestrator | 2026-04-04 01:00:04.745662 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-04 01:00:04.745669 | orchestrator | Saturday 04 April 2026 00:58:34 +0000 (0:00:01.040) 0:00:49.135 ******** 2026-04-04 01:00:04.745675 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.745681 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:04.745687 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:04.745693 | orchestrator | 2026-04-04 01:00:04.745700 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-04 01:00:04.745706 | orchestrator | Saturday 04 April 2026 00:58:34 +0000 (0:00:00.261) 0:00:49.397 ******** 2026-04-04 01:00:04.745713 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.745719 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.745725 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.745731 | orchestrator | 2026-04-04 01:00:04.745738 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-04 01:00:04.745744 | orchestrator | Saturday 04 April 2026 00:58:35 +0000 (0:00:00.276) 0:00:49.673 ******** 2026-04-04 01:00:04.745751 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.745757 | orchestrator | 2026-04-04 01:00:04.745763 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-04 01:00:04.745769 | orchestrator | Saturday 04 April 2026 00:58:50 +0000 (0:00:15.791) 
0:01:05.464 ******** 2026-04-04 01:00:04.745781 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.745788 | orchestrator | 2026-04-04 01:00:04.745794 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-04 01:00:04.745800 | orchestrator | Saturday 04 April 2026 00:59:03 +0000 (0:00:12.118) 0:01:17.583 ******** 2026-04-04 01:00:04.745807 | orchestrator | 2026-04-04 01:00:04.745813 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-04 01:00:04.745819 | orchestrator | Saturday 04 April 2026 00:59:03 +0000 (0:00:00.069) 0:01:17.653 ******** 2026-04-04 01:00:04.745825 | orchestrator | 2026-04-04 01:00:04.745831 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-04 01:00:04.745837 | orchestrator | Saturday 04 April 2026 00:59:03 +0000 (0:00:00.065) 0:01:17.719 ******** 2026-04-04 01:00:04.745843 | orchestrator | 2026-04-04 01:00:04.745850 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-04 01:00:04.745856 | orchestrator | Saturday 04 April 2026 00:59:03 +0000 (0:00:00.229) 0:01:17.948 ******** 2026-04-04 01:00:04.745863 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.745869 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:04.745876 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:04.745882 | orchestrator | 2026-04-04 01:00:04.745888 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-04 01:00:04.745894 | orchestrator | Saturday 04 April 2026 00:59:12 +0000 (0:00:09.443) 0:01:27.392 ******** 2026-04-04 01:00:04.745901 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.745907 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:04.745913 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:04.745920 | orchestrator | 2026-04-04 
01:00:04.745930 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-04 01:00:04.745937 | orchestrator | Saturday 04 April 2026 00:59:19 +0000 (0:00:06.460) 0:01:33.852 ******** 2026-04-04 01:00:04.745943 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.745949 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:04.745956 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:04.745962 | orchestrator | 2026-04-04 01:00:04.745969 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 01:00:04.745975 | orchestrator | Saturday 04 April 2026 00:59:30 +0000 (0:00:11.501) 0:01:45.353 ******** 2026-04-04 01:00:04.745982 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:04.745988 | orchestrator | 2026-04-04 01:00:04.745995 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-04 01:00:04.746001 | orchestrator | Saturday 04 April 2026 00:59:31 +0000 (0:00:00.714) 0:01:46.068 ******** 2026-04-04 01:00:04.746064 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.746074 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:04.746080 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:04.746087 | orchestrator | 2026-04-04 01:00:04.746094 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-04 01:00:04.746101 | orchestrator | Saturday 04 April 2026 00:59:32 +0000 (0:00:00.700) 0:01:46.769 ******** 2026-04-04 01:00:04.746108 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:04.746115 | orchestrator | 2026-04-04 01:00:04.746121 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-04 01:00:04.746128 | orchestrator | Saturday 04 April 2026 00:59:33 +0000 (0:00:01.693) 0:01:48.462 
******** 2026-04-04 01:00:04.746135 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-04 01:00:04.746141 | orchestrator | 2026-04-04 01:00:04.746148 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] ************* 2026-04-04 01:00:04.746155 | orchestrator | Saturday 04 April 2026 00:59:46 +0000 (0:00:12.777) 0:02:01.239 ******** 2026-04-04 01:00:04.746162 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-04 01:00:04.746169 | orchestrator | 2026-04-04 01:00:04.746175 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************ 2026-04-04 01:00:04.746187 | orchestrator | Saturday 04 April 2026 00:59:50 +0000 (0:00:03.823) 0:02:05.063 ******** 2026-04-04 01:00:04.746195 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-04 01:00:04.746202 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-04 01:00:04.746209 | orchestrator | 2026-04-04 01:00:04.746216 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-04 01:00:04.746222 | orchestrator | Saturday 04 April 2026 00:59:57 +0000 (0:00:06.892) 0:02:11.956 ******** 2026-04-04 01:00:04.746229 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.746235 | orchestrator | 2026-04-04 01:00:04.746242 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-04 01:00:04.746252 | orchestrator | Saturday 04 April 2026 00:59:57 +0000 (0:00:00.108) 0:02:12.064 ******** 2026-04-04 01:00:04.746259 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.746265 | orchestrator | 2026-04-04 01:00:04.746271 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-04 01:00:04.746277 | orchestrator | Saturday 04 April 2026 00:59:57 
+0000 (0:00:00.104) 0:02:12.169 ******** 2026-04-04 01:00:04.746283 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.746289 | orchestrator | 2026-04-04 01:00:04.746295 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] *********** 2026-04-04 01:00:04.746301 | orchestrator | Saturday 04 April 2026 00:59:57 +0000 (0:00:00.297) 0:02:12.466 ******** 2026-04-04 01:00:04.746307 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.746313 | orchestrator | 2026-04-04 01:00:04.746320 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-04 01:00:04.746326 | orchestrator | Saturday 04 April 2026 00:59:58 +0000 (0:00:00.310) 0:02:12.777 ******** 2026-04-04 01:00:04.746332 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:04.746338 | orchestrator | 2026-04-04 01:00:04.746344 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 01:00:04.746350 | orchestrator | Saturday 04 April 2026 01:00:01 +0000 (0:00:03.742) 0:02:16.519 ******** 2026-04-04 01:00:04.746357 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:04.746362 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:04.746379 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:04.746386 | orchestrator | 2026-04-04 01:00:04.746392 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:00:04.746398 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-04 01:00:04.746406 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-04 01:00:04.746412 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-04 01:00:04.746418 | orchestrator | 2026-04-04 01:00:04.746425 | orchestrator | 2026-04-04 
01:00:04.746431 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:00:04.746438 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.408) 0:02:16.928 ******** 2026-04-04 01:00:04.746445 | orchestrator | =============================================================================== 2026-04-04 01:00:04.746452 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.79s 2026-04-04 01:00:04.746459 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.78s 2026-04-04 01:00:04.746472 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.12s 2026-04-04 01:00:04.746479 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.50s 2026-04-04 01:00:04.746486 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.44s 2026-04-04 01:00:04.746500 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.22s 2026-04-04 01:00:04.746507 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 6.89s 2026-04-04 01:00:04.746514 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 6.46s 2026-04-04 01:00:04.746521 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.62s 2026-04-04 01:00:04.746528 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 3.82s 2026-04-04 01:00:04.746535 | orchestrator | keystone : Creating default user role ----------------------------------- 3.74s 2026-04-04 01:00:04.746542 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.38s 2026-04-04 01:00:04.746549 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.27s 2026-04-04 01:00:04.746556 | 
orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.66s 2026-04-04 01:00:04.746563 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.57s 2026-04-04 01:00:04.746570 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.33s 2026-04-04 01:00:04.746577 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.28s 2026-04-04 01:00:04.746584 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.28s 2026-04-04 01:00:04.746591 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.69s 2026-04-04 01:00:04.746598 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.66s 2026-04-04 01:00:04.746606 | orchestrator | 2026-04-04 01:00:04 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:00:04.746613 | orchestrator | 2026-04-04 01:00:04 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:00:04.746620 | orchestrator | 2026-04-04 01:00:04 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 01:00:04.746627 | orchestrator | 2026-04-04 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:00:07.790114 | orchestrator | 2026-04-04 01:00:07 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:00:07.790178 | orchestrator | 2026-04-04 01:00:07 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:00:07.791251 | orchestrator | 2026-04-04 01:00:07 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:00:07.791883 | orchestrator | 2026-04-04 01:00:07 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state STARTED 2026-04-04 01:00:07.791917 | orchestrator | 2026-04-04 01:00:07 | INFO  | Wait 1 second(s) until 
the next check 2026-04-04 01:00:16.969838 | orchestrator | 2026-04-04 01:00:16 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:00:17.012867 | orchestrator | 2026-04-04 01:00:16 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:00:17.012929 | orchestrator | 2026-04-04 01:00:16 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:00:17.012937 | orchestrator | 2026-04-04 01:00:16 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:00:17.012944 | orchestrator | 2026-04-04 01:00:16 | INFO  | Task 061b1167-3293-4f46-87da-ed4d8b030e71 is in state
SUCCESS 2026-04-04 01:00:17.012951 | orchestrator | 2026-04-04 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:00:20.019701 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:00:20.020512 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:00:20.021504 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:00:20.022363 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:00:20.022653 | orchestrator | 2026-04-04 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:00:23.047854 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:00:23.048255 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:00:23.049043 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:00:23.049889 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:00:23.050076 | orchestrator | 2026-04-04 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:00:26.079455 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:00:26.080808 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:00:26.081808 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:00:26.083424 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 
01:00:26.083482 | orchestrator | 2026-04-04 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:01:23.801107 | orchestrator | 2026-04-04 01:01:23 | INFO  |
Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:01:23.801330 | orchestrator | 2026-04-04 01:01:23 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:01:23.802341 | orchestrator | 2026-04-04 01:01:23 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:01:23.802929 | orchestrator | 2026-04-04 01:01:23 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:01:23.802957 | orchestrator | 2026-04-04 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:01:26.841629 | orchestrator | 2026-04-04 01:01:26 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:01:26.843091 | orchestrator | 2026-04-04 01:01:26 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:01:26.844650 | orchestrator | 2026-04-04 01:01:26 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:01:26.845867 | orchestrator | 2026-04-04 01:01:26 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:01:26.845912 | orchestrator | 2026-04-04 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:01:29.884702 | orchestrator | 2026-04-04 01:01:29 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:01:29.884760 | orchestrator | 2026-04-04 01:01:29 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:01:29.886494 | orchestrator | 2026-04-04 01:01:29 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:01:29.886521 | orchestrator | 2026-04-04 01:01:29 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:01:29.886527 | orchestrator | 2026-04-04 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:01:32.921894 | orchestrator | 2026-04-04 01:01:32 | INFO  | Task 
97ff843f-29b9-4cad-b32e-d16c635726ce is in state STARTED 2026-04-04 01:01:32.922819 | orchestrator | 2026-04-04 01:01:32 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:01:32.926125 | orchestrator | 2026-04-04 01:01:32 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED 2026-04-04 01:01:32.929662 | orchestrator | 2026-04-04 01:01:32 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:01:32.929863 | orchestrator | 2026-04-04 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:01:35.968788 | orchestrator | 2026-04-04 01:01:35 | INFO  | Task 97ff843f-29b9-4cad-b32e-d16c635726ce is in state SUCCESS 2026-04-04 01:01:35.971287 | orchestrator | 2026-04-04 01:01:35.971341 | orchestrator | 2026-04-04 01:01:35.971353 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:01:35.971363 | orchestrator | 2026-04-04 01:01:35.971370 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:01:35.971375 | orchestrator | Saturday 04 April 2026 00:59:42 +0000 (0:00:00.385) 0:00:00.385 ******** 2026-04-04 01:01:35.971381 | orchestrator | ok: [testbed-manager] 2026-04-04 01:01:35.971386 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:01:35.971391 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:01:35.971396 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:01:35.971401 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:01:35.971406 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:01:35.971411 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:01:35.971415 | orchestrator | 2026-04-04 01:01:35.971420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:01:35.971425 | orchestrator | Saturday 04 April 2026 00:59:42 +0000 (0:00:00.636) 0:00:01.021 ******** 2026-04-04 01:01:35.971430 | orchestrator | 
ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-04 01:01:35.971449 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-04 01:01:35.971455 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-04 01:01:35.971460 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-04 01:01:35.971465 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-04 01:01:35.971469 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-04 01:01:35.971474 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-04 01:01:35.971479 | orchestrator | 2026-04-04 01:01:35.971484 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-04 01:01:35.971489 | orchestrator | 2026-04-04 01:01:35.971494 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-04 01:01:35.971499 | orchestrator | Saturday 04 April 2026 00:59:43 +0000 (0:00:00.749) 0:00:01.771 ******** 2026-04-04 01:01:35.971505 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:01:35.971510 | orchestrator | 2026-04-04 01:01:35.971515 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] ************* 2026-04-04 01:01:35.971520 | orchestrator | Saturday 04 April 2026 00:59:44 +0000 (0:00:01.371) 0:00:03.143 ******** 2026-04-04 01:01:35.971525 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-04-04 01:01:35.971530 | orchestrator | 2026-04-04 01:01:35.971535 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************ 2026-04-04 01:01:35.971540 | orchestrator | Saturday 04 April 2026 00:59:48 +0000 (0:00:03.957) 0:00:07.100 ******** 2026-04-04 01:01:35.971545 | 
orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-04 01:01:35.971551 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-04 01:01:35.971556 | orchestrator | 2026-04-04 01:01:35.971561 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-04 01:01:35.971566 | orchestrator | Saturday 04 April 2026 00:59:55 +0000 (0:00:07.032) 0:00:14.133 ******** 2026-04-04 01:01:35.971571 | orchestrator | changed: [testbed-manager] => (item=service) 2026-04-04 01:01:35.971576 | orchestrator | 2026-04-04 01:01:35.971580 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-04 01:01:35.971585 | orchestrator | Saturday 04 April 2026 00:59:59 +0000 (0:00:03.658) 0:00:17.792 ******** 2026-04-04 01:01:35.971590 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-04-04 01:01:35.971595 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:01:35.971599 | orchestrator | 2026-04-04 01:01:35.971604 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-04 01:01:35.971609 | orchestrator | Saturday 04 April 2026 01:00:03 +0000 (0:00:03.625) 0:00:21.417 ******** 2026-04-04 01:01:35.971614 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-04-04 01:01:35.971632 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-04-04 01:01:35.971638 | orchestrator | 2026-04-04 01:01:35.971643 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] *********** 2026-04-04 01:01:35.971647 | orchestrator | Saturday 04 April 2026 01:00:08 +0000 (0:00:05.827) 0:00:27.244 ******** 2026-04-04 01:01:35.971652 | orchestrator | changed: [testbed-manager] => 
(item=ceph_rgw -> service -> admin) 2026-04-04 01:01:35.971657 | orchestrator | 2026-04-04 01:01:35.971661 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:01:35.971667 | orchestrator | testbed-manager : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:01:35.971672 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:01:35.971765 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:01:35.972059 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:01:35.972073 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:01:35.972086 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:01:35.972091 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:01:35.972096 | orchestrator | 2026-04-04 01:01:35.972101 | orchestrator | 2026-04-04 01:01:35.972106 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:01:35.972111 | orchestrator | Saturday 04 April 2026 01:00:13 +0000 (0:00:04.550) 0:00:31.795 ******** 2026-04-04 01:01:35.972116 | orchestrator | =============================================================================== 2026-04-04 01:01:35.972121 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 7.03s 2026-04-04 01:01:35.972125 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.83s 2026-04-04 01:01:35.972130 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 4.55s 2026-04-04 01:01:35.972135 | orchestrator | 
service-ks-register : ceph-rgw | Creating/deleting services ------------- 3.96s 2026-04-04 01:01:35.972140 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.66s 2026-04-04 01:01:35.972144 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.63s 2026-04-04 01:01:35.972149 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.37s 2026-04-04 01:01:35.972154 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2026-04-04 01:01:35.972159 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s 2026-04-04 01:01:35.972164 | orchestrator | 2026-04-04 01:01:35.972168 | orchestrator | 2026-04-04 01:01:35.972173 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:01:35.972178 | orchestrator | 2026-04-04 01:01:35.972183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:01:35.972188 | orchestrator | Saturday 04 April 2026 00:58:57 +0000 (0:00:00.309) 0:00:00.309 ******** 2026-04-04 01:01:35.972193 | orchestrator | ok: [testbed-manager] 2026-04-04 01:01:35.972197 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:01:35.972202 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:01:35.972285 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:01:35.972370 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:01:35.972379 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:01:35.972388 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:01:35.972394 | orchestrator | 2026-04-04 01:01:35.972399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:01:35.972406 | orchestrator | Saturday 04 April 2026 00:58:57 +0000 (0:00:00.685) 0:00:00.995 ******** 2026-04-04 01:01:35.972674 | orchestrator | ok: 
[testbed-manager] => (item=enable_prometheus_True)
2026-04-04 01:01:35.972686 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-04 01:01:35.972691 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-04 01:01:35.972696 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-04 01:01:35.972702 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-04 01:01:35.972707 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-04 01:01:35.972712 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-04 01:01:35.972717 | orchestrator |
2026-04-04 01:01:35.972722 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-04 01:01:35.972734 | orchestrator |
2026-04-04 01:01:35.972739 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-04 01:01:35.972747 | orchestrator | Saturday 04 April 2026 00:58:58 +0000 (0:00:00.835) 0:00:01.830 ********
2026-04-04 01:01:35.972755 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 01:01:35.972765 | orchestrator |
2026-04-04 01:01:35.972774 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-04 01:01:35.972783 | orchestrator | Saturday 04 April 2026 00:58:59 +0000 (0:00:01.200) 0:00:03.030 ********
2026-04-04 01:01:35.972793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.972810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.972844 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-04 01:01:35.972851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.972857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.972866 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.972906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.972913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.972925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973206 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973313 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:01:35.973325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973361 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973419 | orchestrator |
2026-04-04 01:01:35.973424 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-04 01:01:35.973429 | orchestrator | Saturday 04 April 2026 00:59:03 +0000 (0:00:03.875) 0:00:06.906 ********
2026-04-04 01:01:35.973435 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 01:01:35.973440 | orchestrator |
2026-04-04 01:01:35.973445 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-04 01:01:35.973450 | orchestrator | Saturday 04 April 2026 00:59:05 +0000 (0:00:01.713) 0:00:08.620 ********
2026-04-04 01:01:35.973455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973552 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-04 01:01:35.973567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973615 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973695 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973754 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:01:35.973760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.973806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973816 | orchestrator |
2026-04-04 01:01:35.973821 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-04-04 01:01:35.973826 | orchestrator | Saturday 04 April 2026 00:59:11 +0000 (0:00:05.782) 0:00:14.403 ********
2026-04-04 01:01:35.973834 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-04 01:01:35.973854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:01:35.973874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.973897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.973920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.973926 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.973932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.973937 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:01:35.973942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.973947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.973953 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:01:35.973973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.973983 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:01:35.973988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.973993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.973998 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:01:35.974003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974119 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:01:35.974124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974129 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:01:35.974135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974150 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:01:35.974155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974160 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:01:35.974165 | orchestrator | 2026-04-04 01:01:35.974170 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-04 01:01:35.974179 | orchestrator | Saturday 04 April 2026 00:59:13 +0000 (0:00:02.125) 0:00:16.529 ******** 2026-04-04 01:01:35.974202 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-04 01:01:35.974210 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974243 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974259 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:01:35.974285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974311 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:01:35.974317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974347 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:01:35.974368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974415 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:01:35.974424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974456 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:01:35.974462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974468 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:01:35.974474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.974480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.974487 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:01:35.974496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.974509 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:01:35.974515 | orchestrator | 2026-04-04 01:01:35.974521 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-04 01:01:35.974526 | orchestrator | Saturday 04 April 2026 00:59:16 +0000 (0:00:02.846) 0:00:19.375 ******** 2026-04-04 01:01:35.974549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-04 01:01:35.974556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.974561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.974566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 
01:01:35.974574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.974580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.974585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.974592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.974601 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974617 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974630 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:01:35.974638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974675 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974686 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.974722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.974732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:01:35.974737 | orchestrator |
2026-04-04 01:01:35.974742 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-04-04 01:01:35.974747 | orchestrator | Saturday 04 April 2026 00:59:22 +0000 (0:00:06.629) 0:00:26.005 ********
2026-04-04 01:01:35.974752 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-04 01:01:35.974757 | orchestrator |
2026-04-04 01:01:35.974762 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-04 01:01:35.974767 | orchestrator | Saturday 04 April 2026 00:59:23 +0000 (0:00:00.861) 0:00:26.866 ********
2026-04-04 01:01:35.974771 | orchestrator | skipping: [testbed-manager]
2026-04-04 01:01:35.974776 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:01:35.974781 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:01:35.974786 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:01:35.974791 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.974796 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.974801 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.974805 | orchestrator |
2026-04-04 01:01:35.974810 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-04 01:01:35.974818 | orchestrator | Saturday 04 April 2026 00:59:24 +0000 (0:00:00.723) 0:00:27.590 ********
2026-04-04 01:01:35.974823 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-04 01:01:35.974827 | orchestrator |
2026-04-04 01:01:35.974832 |
orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-04 01:01:35.974837 | orchestrator | Saturday 04 April 2026 00:59:25 +0000 (0:00:00.685) 0:00:28.275 ********
2026-04-04 01:01:35.974843 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.974850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.974856 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-04 01:01:35.974861 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.974865 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-04 01:01:35.974928 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 01:01:35.974934 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.974939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.974944 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-04 01:01:35.974948 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.974954 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-04 01:01:35.974959 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-04 01:01:35.974964 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.974968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.974973 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-04 01:01:35.974978 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.974983 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-04 01:01:35.974988 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-04 01:01:35.974993 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.974998 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975003 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-04 01:01:35.975008 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975013 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-04 01:01:35.975018 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-04 01:01:35.975023 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.975028 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975033 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-04 01:01:35.975038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975043 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-04 01:01:35.975048 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-04 01:01:35.975053 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.975058 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975062 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-04 01:01:35.975067 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975074 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-04 01:01:35.975083 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-04 01:01:35.975092 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.975100 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975107 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-04 01:01:35.975114 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:01:35.975122 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-04 01:01:35.975130 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-04 01:01:35.975138 | orchestrator |
2026-04-04 01:01:35.975147 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-04 01:01:35.975155 | orchestrator | Saturday 04 April 2026 00:59:26 +0000 (0:00:01.590) 0:00:29.866 ********
2026-04-04 01:01:35.975164 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:01:35.975172 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:01:35.975179 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:01:35.975184 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:01:35.975189 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:01:35.975198 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.975203 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:01:35.975208 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:01:35.975233 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:01:35.975239 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.975244 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:01:35.975248 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.975254 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:01:35.975258 | orchestrator |
2026-04-04 01:01:35.975263 | orchestrator | TASK
[prometheus : Copying over prometheus web config file] ********************
2026-04-04 01:01:35.975271 | orchestrator | Saturday 04 April 2026 00:59:40 +0000 (0:00:14.039) 0:00:43.905 ********
2026-04-04 01:01:35.975276 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:01:35.975281 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:01:35.975288 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:01:35.975304 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:01:35.975316 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:01:35.975324 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:01:35.975333 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:01:35.975340 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.975348 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:01:35.975355 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.975363 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:01:35.975370 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.975378 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:01:35.975386 | orchestrator |
2026-04-04 01:01:35.975394 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-04 01:01:35.975403 | orchestrator | Saturday 04 April 2026 00:59:43 +0000 (0:00:03.001) 0:00:46.907 ********
2026-04-04 01:01:35.975411 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:01:35.975420 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:01:35.975427 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:01:35.975432 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:01:35.975437 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:01:35.975441 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:01:35.975446 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:01:35.975451 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.975456 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:01:35.975461 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.975466 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:01:35.975471 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.975476 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:01:35.975487 | orchestrator |
2026-04-04 01:01:35.975492 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-04 01:01:35.975497 | orchestrator | Saturday 04 April 2026 00:59:45 +0000 (0:00:01.448) 0:00:48.356 ********
2026-04-04 01:01:35.975502 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-04 01:01:35.975507 | orchestrator |
2026-04-04 01:01:35.975512 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-04 01:01:35.975517 | orchestrator | Saturday 04 April 2026 00:59:45 +0000 (0:00:00.720) 0:00:49.076 ********
2026-04-04 01:01:35.975522 | orchestrator | skipping: [testbed-manager]
2026-04-04 01:01:35.975527 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:01:35.975532 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:01:35.975537 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:01:35.975542 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.975547 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.975552 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.975557 | orchestrator |
2026-04-04 01:01:35.975562 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-04 01:01:35.975568 | orchestrator | Saturday 04 April 2026 00:59:46 +0000 (0:00:00.730) 0:00:49.807 ********
2026-04-04 01:01:35.975574 | orchestrator | skipping: [testbed-manager]
2026-04-04 01:01:35.975580 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.975586 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.975592 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.975599 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:01:35.975608 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:01:35.975620 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:01:35.975628 | orchestrator |
2026-04-04 01:01:35.975636 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-04 01:01:35.975645 | orchestrator | Saturday 04 April 2026 00:59:48 +0000 (0:00:02.226) 0:00:52.034 ********
2026-04-04 01:01:35.975653 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-04 01:01:35.975661 | orchestrator | skipping: [testbed-manager]
2026-04-04 01:01:35.975669 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-04 01:01:35.975677 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:01:35.975685 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-04 01:01:35.975693 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-04 01:01:35.975707 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:01:35.975716 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:01:35.975725 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-04 01:01:35.975734 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.975743 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-04 01:01:35.975758 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.975766 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-04 01:01:35.975772 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.975778 | orchestrator |
2026-04-04 01:01:35.975784 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-04 01:01:35.975790 | orchestrator | Saturday 04 April 2026 00:59:50 +0000 (0:00:01.372) 0:00:53.407 ********
2026-04-04 01:01:35.975796 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-04 01:01:35.975801 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:01:35.975806 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-04 01:01:35.975816 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:01:35.975821 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-04 01:01:35.975826 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:01:35.975831 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-04 01:01:35.975836 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:01:35.975841 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-04 01:01:35.975845 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:01:35.975850 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-04 01:01:35.975855 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.975860 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-04 01:01:35.975865 | orchestrator |
2026-04-04 01:01:35.975870 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-04 01:01:35.975875 | orchestrator | Saturday 04 April 2026 00:59:51 +0000 (0:00:01.474) 0:00:54.881 ********
2026-04-04 01:01:35.975880 | orchestrator | [WARNING]: Skipped
2026-04-04 01:01:35.975885 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-04 01:01:35.975889 | orchestrator | due to this access issue:
2026-04-04 01:01:35.975894 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-04 01:01:35.975899 | orchestrator | not a directory
2026-04-04 01:01:35.975904 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-04 01:01:35.975909 | orchestrator |
2026-04-04 01:01:35.975914 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-04 01:01:35.975919 | orchestrator | Saturday 04 April 2026 00:59:52 +0000 (0:00:01.009) 0:00:55.891 ******** 2026-04-04 01:01:35.975924 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:01:35.975929 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:01:35.975934 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:01:35.975939 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:01:35.975944 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:01:35.975948 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:01:35.975953 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:01:35.975958 | orchestrator | 2026-04-04 01:01:35.975963 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-04 01:01:35.975968 | orchestrator | Saturday 04 April 2026 00:59:53 +0000 (0:00:00.610) 0:00:56.501 ******** 2026-04-04 01:01:35.975973 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:01:35.975978 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:01:35.975983 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:01:35.975987 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:01:35.975992 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:01:35.975997 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:01:35.976002 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:01:35.976007 | orchestrator | 2026-04-04 01:01:35.976012 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-04 01:01:35.976016 | orchestrator | Saturday 04 April 2026 00:59:54 +0000 (0:00:00.714) 0:00:57.216 ******** 2026-04-04 01:01:35.976022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.976039 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-04 01:01:35.976045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.976051 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.976056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.976061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.976066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.976072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976082 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:01:35.976090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 
01:01:35.976116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976135 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976169 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:01:35.976183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:01:35.976194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:01:35.976232 | orchestrator | 2026-04-04 01:01:35.976237 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-04 01:01:35.976242 | orchestrator | Saturday 04 April 2026 00:59:58 +0000 (0:00:04.009) 0:01:01.225 ******** 2026-04-04 01:01:35.976247 | orchestrator | changed: [testbed-manager] => { 2026-04-04 01:01:35.976252 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:01:35.976258 | orchestrator | } 2026-04-04 01:01:35.976262 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:01:35.976267 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:01:35.976272 | orchestrator | } 2026-04-04 01:01:35.976277 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:01:35.976282 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:01:35.976286 | orchestrator | } 2026-04-04 01:01:35.976291 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:01:35.976296 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:01:35.976301 | orchestrator | } 2026-04-04 01:01:35.976306 | orchestrator | 
changed: [testbed-node-3] => { 2026-04-04 01:01:35.976310 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:01:35.976315 | orchestrator | } 2026-04-04 01:01:35.976320 | orchestrator | changed: [testbed-node-4] => { 2026-04-04 01:01:35.976325 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:01:35.976330 | orchestrator | } 2026-04-04 01:01:35.976335 | orchestrator | changed: [testbed-node-5] => { 2026-04-04 01:01:35.976339 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:01:35.976344 | orchestrator | } 2026-04-04 01:01:35.976349 | orchestrator | 2026-04-04 01:01:35.976354 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:01:35.976359 | orchestrator | Saturday 04 April 2026 00:59:58 +0000 (0:00:00.688) 0:01:01.913 ******** 2026-04-04 01:01:35.976369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.976375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.976404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976430 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-04 01:01:35.976439 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:01:35.976444 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.976449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:01:35.976467 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.976477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976493 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:01:35.976503 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:01:35.976508 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:01:35.976513 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:01:35.976518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.976525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976539 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:01:35.976544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.976553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976563 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:01:35.976568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:01:35.976573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:01:35.976578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 01:01:35.976583 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:01:35.976588 | orchestrator |
2026-04-04 01:01:35.976593 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-04 01:01:35.976601 | orchestrator | Saturday 04 April 2026 01:00:01 +0000 (0:00:02.207) 0:01:04.120 ********
2026-04-04 01:01:35.976610 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-04 01:01:35.976617 | orchestrator | skipping: [testbed-manager]
2026-04-04 01:01:35.976625 | orchestrator |
2026-04-04 01:01:35.976633 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-04 01:01:35.976748 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:01.145) 0:01:05.266 ********
2026-04-04 01:01:35.976761 | orchestrator |
2026-04-04 01:01:35.976770 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-04 01:01:35.976778 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.258) 0:01:05.525 ********
2026-04-04 01:01:35.976786 | orchestrator |
2026-04-04 01:01:35.976794 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-04 01:01:35.976802 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.063) 0:01:05.588 ********
2026-04-04 01:01:35.976818 | orchestrator |
2026-04-04 01:01:35.976827 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-04 01:01:35.976835 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.062) 0:01:05.651 ********
2026-04-04 01:01:35.976844 | orchestrator |
2026-04-04 01:01:35.976852 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-04 01:01:35.976860 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.063) 0:01:05.714 ********
2026-04-04 01:01:35.976869 | orchestrator |
2026-04-04 01:01:35.976876 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-04 01:01:35.976882 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.060) 0:01:05.775 ********
2026-04-04 01:01:35.976886 | orchestrator |
2026-04-04 01:01:35.976891 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-04 01:01:35.976897 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.074) 0:01:05.850 ********
2026-04-04 01:01:35.976901 | orchestrator |
2026-04-04 01:01:35.976906 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-04 01:01:35.976911 | orchestrator | Saturday 04 April 2026 01:00:02 +0000 (0:00:00.088) 0:01:05.938 ********
2026-04-04 01:01:35.976916 | orchestrator | changed: [testbed-manager]
2026-04-04 01:01:35.976921 | orchestrator |
2026-04-04 01:01:35.976926 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-04 01:01:35.976931 | orchestrator | Saturday 04 April 2026 01:00:18 +0000 (0:00:15.752) 0:01:21.691 ********
2026-04-04 01:01:35.976936 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:01:35.976940 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:01:35.976945 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:01:35.976950 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:01:35.976955 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:01:35.976960 | orchestrator | changed: [testbed-manager]
2026-04-04 01:01:35.976965 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:01:35.976969 | orchestrator |
2026-04-04 01:01:35.976974 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-04 01:01:35.976979 | orchestrator | Saturday 04 April 2026 01:00:32 +0000 (0:00:14.341) 0:01:36.032 ********
2026-04-04 01:01:35.976984 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:01:35.976989 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:01:35.976994 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:01:35.976999 | orchestrator |
2026-04-04 01:01:35.977004 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-04 01:01:35.977009 | orchestrator | Saturday 04 April 2026 01:00:38 +0000 (0:00:05.380) 0:01:41.413 ********
2026-04-04 01:01:35.977013 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:01:35.977018 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:01:35.977023 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:01:35.977028 | orchestrator |
2026-04-04 01:01:35.977033 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-04 01:01:35.977038 | orchestrator | Saturday 04 April 2026 01:00:48 +0000 (0:00:10.441) 0:01:51.854 ********
2026-04-04 01:01:35.977043 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:01:35.977048 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:01:35.977052 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:01:35.977057 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:01:35.977062 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:01:35.977067 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:01:35.977072 | orchestrator | changed: [testbed-manager]
2026-04-04 01:01:35.977077 | orchestrator |
2026-04-04 01:01:35.977081 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-04 01:01:35.977086 | orchestrator | Saturday 04 April 2026 01:01:01 +0000 (0:00:13.017) 0:02:04.872 ********
2026-04-04 01:01:35.977091 | orchestrator | changed: [testbed-manager]
2026-04-04 01:01:35.977096 | orchestrator |
2026-04-04 01:01:35.977101 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-04 01:01:35.977110 | orchestrator | Saturday 04 April 2026 01:01:08 +0000 (0:00:06.411) 0:02:11.283 ********
2026-04-04 01:01:35.977115 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:01:35.977120 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:01:35.977125 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:01:35.977130 | orchestrator |
2026-04-04 01:01:35.977134 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-04 01:01:35.977139 | orchestrator | Saturday 04 April 2026 01:01:18 +0000 (0:00:10.530) 0:02:21.813 ********
2026-04-04 01:01:35.977144 | orchestrator | changed: [testbed-manager]
2026-04-04 01:01:35.977149 | orchestrator |
2026-04-04 01:01:35.977154 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-04 01:01:35.977159 | orchestrator | Saturday 04 April 2026 01:01:23 +0000 (0:00:04.602) 0:02:26.416 ********
2026-04-04 01:01:35.977164 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:01:35.977168 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:01:35.977173 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:01:35.977178 | orchestrator |
2026-04-04 01:01:35.977183 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:01:35.977194 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-04-04 01:01:35.977199 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 01:01:35.977208 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 01:01:35.977321 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-04 01:01:35.977339 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-04 01:01:35.977344 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-04 01:01:35.977349 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-04 01:01:35.977354 | orchestrator |
2026-04-04 01:01:35.977359 | orchestrator |
2026-04-04 01:01:35.977364 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:01:35.977369 | orchestrator | Saturday 04 April 2026 01:01:33 +0000 (0:00:10.445) 0:02:36.861 ********
2026-04-04 01:01:35.977374 | orchestrator | ===============================================================================
2026-04-04 01:01:35.977380 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.75s
2026-04-04 01:01:35.977386 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.34s
2026-04-04 01:01:35.977392 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.04s
2026-04-04 01:01:35.977398 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.02s
2026-04-04 01:01:35.977404 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.53s
2026-04-04 01:01:35.977410 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.45s
2026-04-04 01:01:35.977415 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.44s
2026-04-04 01:01:35.977421 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.63s
2026-04-04 01:01:35.977427 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.41s
2026-04-04 01:01:35.977433 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.78s
2026-04-04 01:01:35.977439 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.38s
2026-04-04 01:01:35.977451 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.60s
2026-04-04 01:01:35.977457 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.01s
2026-04-04 01:01:35.977462 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.88s
2026-04-04 01:01:35.977468 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.00s
2026-04-04 01:01:35.977474 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.85s
2026-04-04 01:01:35.977487 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.23s
2026-04-04 01:01:35.977492 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.21s
2026-04-04 01:01:35.977497 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.13s
2026-04-04 01:01:35.977502 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.71s
2026-04-04 01:01:35.977507 | orchestrator | 2026-04-04 01:01:35 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:35.977512 | orchestrator | 2026-04-04 01:01:35 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:35.977517 | orchestrator | 2026-04-04 01:01:35 | INFO  |
Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:35.980054 | orchestrator | 2026-04-04 01:01:35 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:35.980118 | orchestrator | 2026-04-04 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:01:39.100589 | orchestrator | 2026-04-04 01:01:39 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:39.102669 | orchestrator | 2026-04-04 01:01:39 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:39.104474 | orchestrator | 2026-04-04 01:01:39 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:39.106239 | orchestrator | 2026-04-04 01:01:39 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:39.106296 | orchestrator | 2026-04-04 01:01:39 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:01:42.145000 | orchestrator | 2026-04-04 01:01:42 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:42.145090 | orchestrator | 2026-04-04 01:01:42 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:42.145380 | orchestrator | 2026-04-04 01:01:42 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:42.146179 | orchestrator | 2026-04-04 01:01:42 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:42.146314 | orchestrator | 2026-04-04 01:01:42 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:01:45.182948 | orchestrator | 2026-04-04 01:01:45 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:45.185518 | orchestrator | 2026-04-04 01:01:45 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:45.187195 | orchestrator | 2026-04-04 01:01:45 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:45.188304 | orchestrator | 2026-04-04 01:01:45 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:45.188464 | orchestrator | 2026-04-04 01:01:45 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:01:48.231062 | orchestrator | 2026-04-04 01:01:48 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:48.232102 | orchestrator | 2026-04-04 01:01:48 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:48.233076 | orchestrator | 2026-04-04 01:01:48 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:48.233937 | orchestrator | 2026-04-04 01:01:48 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:48.233964 | orchestrator | 2026-04-04 01:01:48 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:01:51.274526 | orchestrator | 2026-04-04 01:01:51 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:51.276338 | orchestrator | 2026-04-04 01:01:51 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:51.277740 | orchestrator | 2026-04-04 01:01:51 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:51.279916 | orchestrator | 2026-04-04 01:01:51 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:51.279978 | orchestrator | 2026-04-04 01:01:51 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:01:54.318436 | orchestrator | 2026-04-04 01:01:54 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:54.319545 | orchestrator | 2026-04-04 01:01:54 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:54.320769 | orchestrator | 2026-04-04 01:01:54 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:54.322135 | orchestrator | 2026-04-04 01:01:54 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:54.322358 | orchestrator | 2026-04-04 01:01:54 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:01:57.366008 | orchestrator | 2026-04-04 01:01:57 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:01:57.366124 | orchestrator | 2026-04-04 01:01:57 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:01:57.366833 | orchestrator | 2026-04-04 01:01:57 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:01:57.367455 | orchestrator | 2026-04-04 01:01:57 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:01:57.367478 | orchestrator | 2026-04-04 01:01:57 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:00.400752 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:00.401312 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:00.402213 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:00.403591 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:00.403854 | orchestrator | 2026-04-04 01:02:00 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:03.438518 | orchestrator | 2026-04-04 01:02:03 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:03.439152 | orchestrator | 2026-04-04 01:02:03 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:03.440544 | orchestrator | 2026-04-04 01:02:03 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:03.441298 | orchestrator | 2026-04-04 01:02:03 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:03.441384 | orchestrator | 2026-04-04 01:02:03 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:06.466217 | orchestrator | 2026-04-04 01:02:06 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:06.466761 | orchestrator | 2026-04-04 01:02:06 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:06.468583 | orchestrator | 2026-04-04 01:02:06 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:06.469080 | orchestrator | 2026-04-04 01:02:06 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:06.469257 | orchestrator | 2026-04-04 01:02:06 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:09.510267 | orchestrator | 2026-04-04 01:02:09 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:09.511932 | orchestrator | 2026-04-04 01:02:09 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:09.513123 | orchestrator | 2026-04-04 01:02:09 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:09.514085 | orchestrator | 2026-04-04 01:02:09 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:09.514120 | orchestrator | 2026-04-04 01:02:09 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:12.543409 | orchestrator | 2026-04-04 01:02:12 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:12.543747 | orchestrator | 2026-04-04 01:02:12 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:12.544580 | orchestrator | 2026-04-04 01:02:12 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:12.545344 | orchestrator | 2026-04-04 01:02:12 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:12.545380 | orchestrator | 2026-04-04 01:02:12 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:15.578776 | orchestrator | 2026-04-04 01:02:15 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:15.579103 | orchestrator | 2026-04-04 01:02:15 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:15.579710 | orchestrator | 2026-04-04 01:02:15 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:15.580404 | orchestrator | 2026-04-04 01:02:15 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:15.580558 | orchestrator | 2026-04-04 01:02:15 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:18.616256 | orchestrator | 2026-04-04 01:02:18 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:18.618111 | orchestrator | 2026-04-04 01:02:18 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:18.620486 | orchestrator | 2026-04-04 01:02:18 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:18.622686 | orchestrator | 2026-04-04 01:02:18 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:18.622783 | orchestrator | 2026-04-04 01:02:18 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:21.657620 | orchestrator | 2026-04-04 01:02:21 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:21.658249 | orchestrator | 2026-04-04 01:02:21 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:21.659112 | orchestrator | 2026-04-04 01:02:21 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:21.659896 | orchestrator | 2026-04-04 01:02:21 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:21.659920 | orchestrator | 2026-04-04 01:02:21 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:24.689421 | orchestrator | 2026-04-04 01:02:24 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:24.689632 | orchestrator | 2026-04-04 01:02:24 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:24.690535 | orchestrator | 2026-04-04 01:02:24 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state STARTED
2026-04-04 01:02:24.691044 | orchestrator | 2026-04-04 01:02:24 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:02:24.691074 | orchestrator | 2026-04-04 01:02:24 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:27.719710 | orchestrator | 2026-04-04 01:02:27 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED
2026-04-04 01:02:27.721053 | orchestrator | 2026-04-04 01:02:27 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:02:27.722271 | orchestrator | 2026-04-04 01:02:27 | INFO  | Task 504e374c-a1c6-498d-a4d7-130ad1279380 is in state SUCCESS
2026-04-04 01:02:27.723519 | orchestrator |
2026-04-04 01:02:27.723559 | orchestrator |
2026-04-04 01:02:27.723567 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:02:27.723575 | orchestrator |
2026-04-04 01:02:27.723581 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 01:02:27.723589 | orchestrator | Saturday 04 April 2026 00:59:42 +0000 (0:00:00.315) 0:00:00.315 ********
2026-04-04 01:02:27.723595 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:02:27.723603 | orchestrator | ok: [testbed-node-1]
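The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above are the deployment driver polling its background task IDs once per second until each reaches a terminal state (e.g. `SUCCESS`). As a rough illustrative sketch only (the function name, state strings, and structure are assumptions, not the actual osism implementation), such a wait loop looks like:

```python
import time

# Assumed terminal states for this sketch; real task backends may define more.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until every task reaches a terminal state.

    get_state: callable mapping a task id to its current state string
    (a stand-in for querying the task backend).
    Returns {task_id: final_state}; raises TimeoutError if time runs out.
    """
    deadline = time.monotonic() + timeout
    final = {}
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
            else:
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"tasks still pending: {pending}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return final
```

A fixed 1-second interval keeps the log output regular, as seen above; a production loop might add jitter or backoff instead.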
2026-04-04 01:02:27.723610 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:02:27.723616 | orchestrator |
2026-04-04 01:02:27.723622 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:02:27.723629 | orchestrator | Saturday 04 April 2026 00:59:42 +0000 (0:00:00.286) 0:00:00.602 ********
2026-04-04 01:02:27.723635 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-04 01:02:27.723642 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-04 01:02:27.723648 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-04 01:02:27.723655 | orchestrator |
2026-04-04 01:02:27.723661 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-04 01:02:27.723667 | orchestrator |
2026-04-04 01:02:27.723673 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-04 01:02:27.723680 | orchestrator | Saturday 04 April 2026 00:59:42 +0000 (0:00:00.267) 0:00:00.870 ********
2026-04-04 01:02:27.723687 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:02:27.723693 | orchestrator |
2026-04-04 01:02:27.723700 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] ***************
2026-04-04 01:02:27.723707 | orchestrator | Saturday 04 April 2026 00:59:43 +0000 (0:00:00.541) 0:00:01.412 ********
2026-04-04 01:02:27.723844 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-04 01:02:27.723851 | orchestrator |
2026-04-04 01:02:27.723857 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] **************
2026-04-04 01:02:27.723864 | orchestrator | Saturday 04 April 2026 00:59:48 +0000 (0:00:04.598) 0:00:06.010 ********
2026-04-04 01:02:27.723870 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-04 01:02:27.723877 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-04 01:02:27.723884 | orchestrator |
2026-04-04 01:02:27.723890 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-04 01:02:27.723915 | orchestrator | Saturday 04 April 2026 00:59:56 +0000 (0:00:08.217) 0:00:14.227 ********
2026-04-04 01:02:27.723922 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:02:27.723929 | orchestrator |
2026-04-04 01:02:27.723935 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-04 01:02:27.723942 | orchestrator | Saturday 04 April 2026 00:59:59 +0000 (0:00:03.435) 0:00:17.663 ********
2026-04-04 01:02:27.723948 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-04 01:02:27.723954 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:02:27.723960 | orchestrator |
2026-04-04 01:02:27.723967 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-04 01:02:27.723973 | orchestrator | Saturday 04 April 2026 01:00:03 +0000 (0:00:04.009) 0:00:21.673 ********
2026-04-04 01:02:27.723979 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:02:27.723986 | orchestrator |
2026-04-04 01:02:27.723992 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] *************
2026-04-04 01:02:27.723998 | orchestrator | Saturday 04 April 2026 01:00:07 +0000 (0:00:03.294) 0:00:24.967 ********
2026-04-04 01:02:27.724004 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-04 01:02:27.724010 | orchestrator |
2026-04-04 01:02:27.724016 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-04 01:02:27.724074 |
orchestrator | Saturday 04 April 2026 01:00:11 +0000 (0:00:04.254) 0:00:29.221 ******** 2026-04-04 01:02:27.724106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724153 | orchestrator | 2026-04-04 01:02:27.724160 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-04 01:02:27.724167 | orchestrator | Saturday 04 April 2026 01:00:15 +0000 (0:00:04.584) 0:00:33.806 ******** 2026-04-04 01:02:27.724179 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:02:27.724187 | orchestrator | 2026-04-04 01:02:27.724193 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 
2026-04-04 01:02:27.724200 | orchestrator | Saturday 04 April 2026 01:00:16 +0000 (0:00:00.764) 0:00:34.570 ******** 2026-04-04 01:02:27.724207 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:02:27.724213 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:02:27.724219 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.724226 | orchestrator | 2026-04-04 01:02:27.724232 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-04 01:02:27.724238 | orchestrator | Saturday 04 April 2026 01:00:23 +0000 (0:00:06.456) 0:00:41.027 ******** 2026-04-04 01:02:27.724245 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-04 01:02:27.724252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-04 01:02:27.724265 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-04 01:02:27.724271 | orchestrator | 2026-04-04 01:02:27.724278 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-04 01:02:27.724285 | orchestrator | Saturday 04 April 2026 01:00:25 +0000 (0:00:02.317) 0:00:43.345 ******** 2026-04-04 01:02:27.724292 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-04 01:02:27.724299 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-04 01:02:27.724306 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 
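The `haproxy` dictionaries logged in the task items above (port `9292`, `frontend_http_extra`, `backend_http_extra`, `custom_member_list`) are consumed by kolla-ansible's HAProxy templating. As a sketch only, the backend stanza those values would render to looks roughly like the following; the `server` lines are the `custom_member_list` entries verbatim from the log, while the stanza name and exact layout are assumptions, not taken from this output:

```
# Sketch of the HAProxy backend implied by the glance_api service definition.
# Stanza name "glance_api_back" is illustrative, not from this log.
backend glance_api_back
    mode http
    timeout server 6h   # from backend_http_extra
    option httpchk      # from backend_http_extra
    server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```

The matching external frontend (`glance_api_external`) differs only in `external: True`, the `external_fqdn` `api.testbed.osism.xyz`, and the `timeout client 6h` frontend option.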
2026-04-04 01:02:27.724313 | orchestrator | 2026-04-04 01:02:27.724319 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-04 01:02:27.724325 | orchestrator | Saturday 04 April 2026 01:00:26 +0000 (0:00:01.472) 0:00:44.818 ******** 2026-04-04 01:02:27.724332 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:02:27.724339 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:02:27.724346 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:02:27.724351 | orchestrator | 2026-04-04 01:02:27.724358 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-04 01:02:27.724364 | orchestrator | Saturday 04 April 2026 01:00:27 +0000 (0:00:00.620) 0:00:45.438 ******** 2026-04-04 01:02:27.724370 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724377 | orchestrator | 2026-04-04 01:02:27.724383 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-04 01:02:27.724389 | orchestrator | Saturday 04 April 2026 01:00:27 +0000 (0:00:00.113) 0:00:45.552 ******** 2026-04-04 01:02:27.724396 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724403 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724410 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724416 | orchestrator | 2026-04-04 01:02:27.724422 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-04 01:02:27.724429 | orchestrator | Saturday 04 April 2026 01:00:27 +0000 (0:00:00.225) 0:00:45.778 ******** 2026-04-04 01:02:27.724435 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:02:27.724442 | orchestrator | 2026-04-04 01:02:27.724448 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-04 01:02:27.724454 | orchestrator | Saturday 04 
April 2026 01:00:28 +0000 (0:00:00.452) 0:00:46.230 ******** 2026-04-04 01:02:27.724470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724504 | orchestrator | 2026-04-04 01:02:27.724511 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-04 01:02:27.724518 | orchestrator | Saturday 04 April 2026 01:00:31 +0000 (0:00:03.026) 0:00:49.256 ******** 2026-04-04 01:02:27.724532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.724540 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.724563 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.724586 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724592 | orchestrator | 2026-04-04 01:02:27.724598 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-04 01:02:27.724604 | orchestrator | Saturday 04 April 2026 01:00:34 +0000 (0:00:03.096) 0:00:52.353 ******** 2026-04-04 01:02:27.724611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.724618 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.724642 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.724655 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724661 | orchestrator | 2026-04-04 01:02:27.724668 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-04 01:02:27.724674 | orchestrator | Saturday 04 April 2026 01:00:37 +0000 (0:00:03.176) 0:00:55.529 ******** 2026-04-04 01:02:27.724679 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724685 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724692 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724698 | orchestrator | 2026-04-04 01:02:27.724704 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-04 01:02:27.724711 | orchestrator | Saturday 04 April 2026 01:00:41 +0000 (0:00:03.701) 0:00:59.231 ******** 2026-04-04 01:02:27.724724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.724754 | orchestrator | 2026-04-04 01:02:27.724761 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-04 01:02:27.724770 | orchestrator | Saturday 04 April 2026 01:00:44 +0000 (0:00:03.464) 0:01:02.696 ******** 2026-04-04 01:02:27.724775 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.724782 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:02:27.724787 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:02:27.724794 | orchestrator | 2026-04-04 01:02:27.724799 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-04 01:02:27.724806 | orchestrator | Saturday 04 April 2026 01:00:50 +0000 (0:00:05.310) 0:01:08.006 ******** 2026-04-04 01:02:27.724811 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724817 
| orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724823 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724829 | orchestrator | 2026-04-04 01:02:27.724835 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-04 01:02:27.724841 | orchestrator | Saturday 04 April 2026 01:00:54 +0000 (0:00:04.675) 0:01:12.682 ******** 2026-04-04 01:02:27.724847 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724853 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724859 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724865 | orchestrator | 2026-04-04 01:02:27.724872 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-04 01:02:27.724879 | orchestrator | Saturday 04 April 2026 01:00:58 +0000 (0:00:03.773) 0:01:16.455 ******** 2026-04-04 01:02:27.724886 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724894 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724901 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724909 | orchestrator | 2026-04-04 01:02:27.724916 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-04 01:02:27.724923 | orchestrator | Saturday 04 April 2026 01:01:01 +0000 (0:00:02.883) 0:01:19.339 ******** 2026-04-04 01:02:27.724930 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724938 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724944 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.724952 | orchestrator | 2026-04-04 01:02:27.724958 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-04 01:02:27.724964 | orchestrator | Saturday 04 April 2026 01:01:01 +0000 (0:00:00.266) 0:01:19.606 ******** 2026-04-04 01:02:27.724970 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-04 01:02:27.724977 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.724984 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-04 01:02:27.724990 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.724996 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-04 01:02:27.725009 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.725015 | orchestrator | 2026-04-04 01:02:27.725022 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-04 01:02:27.725029 | orchestrator | Saturday 04 April 2026 01:01:05 +0000 (0:00:03.601) 0:01:23.207 ******** 2026-04-04 01:02:27.725036 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.725043 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.725048 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.725055 | orchestrator | 2026-04-04 01:02:27.725062 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-04 01:02:27.725069 | orchestrator | Saturday 04 April 2026 01:01:08 +0000 (0:00:03.721) 0:01:26.929 ******** 2026-04-04 01:02:27.725076 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.725083 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.725090 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.725097 | orchestrator | 2026-04-04 01:02:27.725104 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-04 01:02:27.725111 | orchestrator | Saturday 04 April 2026 01:01:12 +0000 (0:00:03.777) 0:01:30.707 ******** 2026-04-04 01:02:27.725127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.725136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.725167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:02:27.725175 | orchestrator | 2026-04-04 01:02:27.725181 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-04 01:02:27.725187 | orchestrator | Saturday 04 April 2026 01:01:16 +0000 (0:00:03.762) 0:01:34.469 ******** 2026-04-04 01:02:27.725194 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:02:27.725201 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:02:27.725208 | orchestrator | } 2026-04-04 01:02:27.725215 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:02:27.725221 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:02:27.725227 | orchestrator | } 2026-04-04 01:02:27.725237 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:02:27.725244 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 
01:02:27.725250 | orchestrator | } 2026-04-04 01:02:27.725256 | orchestrator | 2026-04-04 01:02:27.725262 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:02:27.725269 | orchestrator | Saturday 04 April 2026 01:01:16 +0000 (0:00:00.366) 0:01:34.836 ******** 2026-04-04 01:02:27.725276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.725290 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.725300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.725307 | orchestrator | skipping: [testbed-node-0] 
2026-04-04 01:02:27.725319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:02:27.725330 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.725336 | orchestrator | 2026-04-04 01:02:27.725343 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-04-04 01:02:27.725350 | orchestrator | Saturday 04 April 2026 01:01:19 +0000 (0:00:02.617) 0:01:37.453 ******** 2026-04-04 01:02:27.725356 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:27.725363 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:27.725370 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:27.725376 | orchestrator | 2026-04-04 01:02:27.725383 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-04 01:02:27.725389 | orchestrator | Saturday 04 April 2026 01:01:19 +0000 (0:00:00.243) 0:01:37.696 ******** 2026-04-04 01:02:27.725395 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.725401 | orchestrator | 2026-04-04 01:02:27.725407 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-04 01:02:27.725414 | orchestrator | Saturday 04 April 2026 01:01:21 +0000 (0:00:02.172) 0:01:39.869 ******** 2026-04-04 01:02:27.725420 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.725426 | orchestrator | 2026-04-04 01:02:27.725432 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-04 01:02:27.725439 | orchestrator | Saturday 04 April 2026 01:01:24 +0000 (0:00:02.565) 0:01:42.435 ******** 2026-04-04 01:02:27.725445 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.725451 | orchestrator | 2026-04-04 01:02:27.725457 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-04 01:02:27.725463 | orchestrator | Saturday 04 April 2026 01:01:27 +0000 (0:00:02.592) 0:01:45.027 ******** 2026-04-04 01:02:27.725469 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.725475 | orchestrator | 2026-04-04 01:02:27.725481 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-04 
01:02:27.725487 | orchestrator | Saturday 04 April 2026 01:01:53 +0000 (0:00:26.854) 0:02:11.882 ******** 2026-04-04 01:02:27.725493 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.725499 | orchestrator | 2026-04-04 01:02:27.725505 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-04 01:02:27.725511 | orchestrator | Saturday 04 April 2026 01:01:55 +0000 (0:00:01.957) 0:02:13.839 ******** 2026-04-04 01:02:27.725517 | orchestrator | 2026-04-04 01:02:27.725523 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-04 01:02:27.725530 | orchestrator | Saturday 04 April 2026 01:01:55 +0000 (0:00:00.057) 0:02:13.897 ******** 2026-04-04 01:02:27.725536 | orchestrator | 2026-04-04 01:02:27.725542 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-04 01:02:27.725548 | orchestrator | Saturday 04 April 2026 01:01:56 +0000 (0:00:00.059) 0:02:13.957 ******** 2026-04-04 01:02:27.725554 | orchestrator | 2026-04-04 01:02:27.725561 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-04 01:02:27.725567 | orchestrator | Saturday 04 April 2026 01:01:56 +0000 (0:00:00.084) 0:02:14.041 ******** 2026-04-04 01:02:27.725580 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:27.725586 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:02:27.725592 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:02:27.725598 | orchestrator | 2026-04-04 01:02:27.725604 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:02:27.725615 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-04 01:02:27.725622 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-04 01:02:27.725629 | 
orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-04 01:02:27.725635 | orchestrator | 2026-04-04 01:02:27.725641 | orchestrator | 2026-04-04 01:02:27.725651 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:02:27.725658 | orchestrator | Saturday 04 April 2026 01:02:26 +0000 (0:00:30.774) 0:02:44.816 ******** 2026-04-04 01:02:27.725664 | orchestrator | =============================================================================== 2026-04-04 01:02:27.725670 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.77s 2026-04-04 01:02:27.725675 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.85s 2026-04-04 01:02:27.725681 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 8.22s 2026-04-04 01:02:27.725687 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.46s 2026-04-04 01:02:27.725692 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.31s 2026-04-04 01:02:27.725698 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.68s 2026-04-04 01:02:27.725704 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 4.60s 2026-04-04 01:02:27.725710 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.58s 2026-04-04 01:02:27.725715 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.25s 2026-04-04 01:02:27.725721 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.01s 2026-04-04 01:02:27.725727 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 3.78s 2026-04-04 01:02:27.725733 | orchestrator | glance : Copying over 
glance-image-import.conf -------------------------- 3.77s 2026-04-04 01:02:27.725739 | orchestrator | service-check-containers : glance | Check containers -------------------- 3.76s 2026-04-04 01:02:27.725745 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.72s 2026-04-04 01:02:27.725751 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.70s 2026-04-04 01:02:27.725757 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.60s 2026-04-04 01:02:27.725762 | orchestrator | glance : Copying over config.json files for services -------------------- 3.46s 2026-04-04 01:02:27.725768 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.44s 2026-04-04 01:02:27.725774 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.29s 2026-04-04 01:02:27.725780 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.18s 2026-04-04 01:02:27.725786 | orchestrator | 2026-04-04 01:02:27 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:02:27.725793 | orchestrator | 2026-04-04 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:02:30.749349 | orchestrator | 2026-04-04 01:02:30 | INFO  | Task f7bf065f-e6cb-4ddd-8926-2205affa0335 is in state STARTED 2026-04-04 01:02:30.749765 | orchestrator | 2026-04-04 01:02:30 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:02:30.750501 | orchestrator | 2026-04-04 01:02:30 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:02:30.751426 | orchestrator | 2026-04-04 01:02:30 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:02:30.751452 | orchestrator | 2026-04-04 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:02:33.781895 | orchestrator | 
2026-04-04 01:02:33 | INFO  | Task f7bf065f-e6cb-4ddd-8926-2205affa0335 is in state STARTED 2026-04-04 01:02:33.782506 | orchestrator | 2026-04-04 01:02:33 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:02:33.783467 | orchestrator | 2026-04-04 01:02:33 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:02:33.784375 | orchestrator | 2026-04-04 01:02:33 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:02:33.784402 | orchestrator | 2026-04-04 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:01.084797 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task
f7bf065f-e6cb-4ddd-8926-2205affa0335 is in state STARTED 2026-04-04 01:03:01.085037 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state STARTED 2026-04-04 01:03:01.085877 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:03:01.087703 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:03:01.087748 | orchestrator | 2026-04-04 01:03:01 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:04.110672 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task f7bf065f-e6cb-4ddd-8926-2205affa0335 is in state STARTED 2026-04-04 01:03:04.112429 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task 72ff5dfd-de16-4b3f-8109-d94eb9704ae7 is in state SUCCESS 2026-04-04 01:03:04.113133 | orchestrator | 2026-04-04 01:03:04.113170 | orchestrator | 2026-04-04 01:03:04.113178 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:03:04.113185 | orchestrator | 2026-04-04 01:03:04.113191 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:03:04.113197 | orchestrator | Saturday 04 April 2026 01:00:06 +0000 (0:00:00.420) 0:00:00.420 ******** 2026-04-04 01:03:04.113203 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:03:04.113210 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:03:04.113215 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:03:04.113239 | orchestrator | 2026-04-04 01:03:04.113245 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:03:04.113250 | orchestrator | Saturday 04 April 2026 01:00:06 +0000 (0:00:00.325) 0:00:00.745 ******** 2026-04-04 01:03:04.113256 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-04 01:03:04.113262 | orchestrator | ok: [testbed-node-1] => 
(item=enable_cinder_True) 2026-04-04 01:03:04.113268 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-04 01:03:04.113274 | orchestrator | 2026-04-04 01:03:04.113280 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-04 01:03:04.113286 | orchestrator | 2026-04-04 01:03:04.113292 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:03:04.113298 | orchestrator | Saturday 04 April 2026 01:00:06 +0000 (0:00:00.368) 0:00:01.114 ******** 2026-04-04 01:03:04.113303 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:03:04.113310 | orchestrator | 2026-04-04 01:03:04.113316 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] *************** 2026-04-04 01:03:04.113321 | orchestrator | Saturday 04 April 2026 01:00:07 +0000 (0:00:00.777) 0:00:01.892 ******** 2026-04-04 01:03:04.113327 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage)) 2026-04-04 01:03:04.113333 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-04 01:03:04.113339 | orchestrator | 2026-04-04 01:03:04.113345 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] ************** 2026-04-04 01:03:04.113351 | orchestrator | Saturday 04 April 2026 01:00:14 +0000 (0:00:06.627) 0:00:08.519 ******** 2026-04-04 01:03:04.113357 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal) 2026-04-04 01:03:04.113363 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public) 2026-04-04 01:03:04.113369 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-04 01:03:04.113375 | orchestrator | changed: 
[testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-04 01:03:04.113381 | orchestrator | 2026-04-04 01:03:04.113386 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-04 01:03:04.113392 | orchestrator | Saturday 04 April 2026 01:00:28 +0000 (0:00:14.263) 0:00:22.783 ******** 2026-04-04 01:03:04.113398 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:03:04.113404 | orchestrator | 2026-04-04 01:03:04.113409 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-04 01:03:04.113415 | orchestrator | Saturday 04 April 2026 01:00:31 +0000 (0:00:02.944) 0:00:25.728 ******** 2026-04-04 01:03:04.113421 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-04 01:03:04.113427 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:03:04.113433 | orchestrator | 2026-04-04 01:03:04.113485 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-04 01:03:04.113492 | orchestrator | Saturday 04 April 2026 01:00:35 +0000 (0:00:04.153) 0:00:29.881 ******** 2026-04-04 01:03:04.113498 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:03:04.113504 | orchestrator | 2026-04-04 01:03:04.113509 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] ************* 2026-04-04 01:03:04.113515 | orchestrator | Saturday 04 April 2026 01:00:39 +0000 (0:00:03.954) 0:00:33.836 ******** 2026-04-04 01:03:04.113519 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-04 01:03:04.113524 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-04 01:03:04.113530 | orchestrator | 2026-04-04 01:03:04.114180 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 
2026-04-04 01:03:04.114194 | orchestrator | Saturday 04 April 2026 01:00:47 +0000 (0:00:07.846) 0:00:41.683 ******** 2026-04-04 01:03:04.114224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.114232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.114244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.114253 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.114313 | orchestrator | 2026-04-04 01:03:04.114318 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:03:04.114323 | orchestrator | Saturday 04 April 2026 01:00:51 +0000 (0:00:03.774) 
0:00:45.457 ******** 2026-04-04 01:03:04.114328 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.114333 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.114338 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.114342 | orchestrator | 2026-04-04 01:03:04.114347 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:03:04.114351 | orchestrator | Saturday 04 April 2026 01:00:51 +0000 (0:00:00.544) 0:00:46.002 ******** 2026-04-04 01:03:04.114357 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:03:04.114361 | orchestrator | 2026-04-04 01:03:04.114366 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-04 01:03:04.114371 | orchestrator | Saturday 04 April 2026 01:00:52 +0000 (0:00:00.674) 0:00:46.677 ******** 2026-04-04 01:03:04.114377 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-04 01:03:04.114383 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-04 01:03:04.114389 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-04 01:03:04.114394 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-04 01:03:04.114400 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-04 01:03:04.114405 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-04 01:03:04.114411 | orchestrator | 2026-04-04 01:03:04.114416 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-04 01:03:04.114422 | orchestrator | Saturday 04 April 2026 01:00:54 +0000 (0:00:02.426) 0:00:49.103 ******** 2026-04-04 01:03:04.114428 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-04 01:03:04.114441 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-04 01:03:04.114451 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-04 01:03:04.114456 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-04 01:03:04.114463 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-04 01:03:04.114475 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 
'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-04 01:03:04.114482 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-04 01:03:04.114492 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-04 01:03:04.114498 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-04 01:03:04.114504 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 
'enabled': True}])  2026-04-04 01:03:04.114516 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-04 01:03:04.114527 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-04 01:03:04.114532 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-04 01:03:04.114538 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-04 01:03:04.114551 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-04 01:03:04.114559 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-04 01:03:04.114568 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-04 01:03:04.114574 | orchestrator | ok: [testbed-node-1] => 
(item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-04 01:03:04.114580 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-04 01:03:04.114590 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-04 01:03:04.114897 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-04 01:03:04.114932 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-04 01:03:04.114939 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-04 01:03:04.114945 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 
2026-04-04 01:03:04.114956 | orchestrator | 2026-04-04 01:03:04.114963 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-04 01:03:04.114969 | orchestrator | Saturday 04 April 2026 01:01:00 +0000 (0:00:06.040) 0:00:55.144 ******** 2026-04-04 01:03:04.114976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-04 01:03:04.114982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-04 01:03:04.114988 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-04 01:03:04.114993 | orchestrator | 2026-04-04 01:03:04.114999 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-04 01:03:04.115004 | orchestrator | Saturday 04 April 2026 01:01:02 +0000 (0:00:01.678) 0:00:56.822 ******** 2026-04-04 01:03:04.115013 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-04 01:03:04.115018 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-04 01:03:04.115024 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-04 01:03:04.115030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-04 01:03:04.115035 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-04 01:03:04.115041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-04 01:03:04.115046 | orchestrator | 2026-04-04 01:03:04.115050 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-04 01:03:04.115056 | orchestrator | Saturday 04 April 2026 01:01:05 +0000 (0:00:03.193) 0:01:00.016 ******** 2026-04-04 01:03:04.115076 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-04 01:03:04.115082 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-04 01:03:04.115107 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-04 01:03:04.115112 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-04 01:03:04.115117 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-04 01:03:04.115122 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-04 01:03:04.115126 | orchestrator | 2026-04-04 01:03:04.115131 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-04 01:03:04.115136 | orchestrator | Saturday 04 April 2026 01:01:07 +0000 (0:00:01.350) 0:01:01.369 ******** 2026-04-04 01:03:04.115141 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.115147 | orchestrator | 2026-04-04 01:03:04.115152 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-04 01:03:04.115163 | orchestrator | Saturday 04 April 2026 01:01:07 +0000 (0:00:00.315) 0:01:01.685 ******** 2026-04-04 01:03:04.115169 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.115175 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.115180 | 
orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.115184 | orchestrator | 2026-04-04 01:03:04.115189 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:03:04.115193 | orchestrator | Saturday 04 April 2026 01:01:07 +0000 (0:00:00.300) 0:01:01.986 ******** 2026-04-04 01:03:04.115198 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:03:04.115204 | orchestrator | 2026-04-04 01:03:04.115208 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-04 01:03:04.115213 | orchestrator | Saturday 04 April 2026 01:01:08 +0000 (0:00:00.838) 0:01:02.824 ******** 2026-04-04 01:03:04.115219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115338 | orchestrator | 2026-04-04 01:03:04.115344 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-04 01:03:04.115349 | orchestrator | Saturday 04 April 2026 01:01:13 +0000 (0:00:05.297) 0:01:08.121 ******** 2026-04-04 01:03:04.115357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.115363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115397 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.115403 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.115411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115450 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.115456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.115462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115484 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.115494 | orchestrator | 2026-04-04 01:03:04.115499 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-04 01:03:04.115504 | orchestrator | Saturday 04 April 2026 01:01:14 +0000 (0:00:01.114) 0:01:09.236 ******** 2026-04-04 01:03:04.115513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.115519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})  2026-04-04 01:03:04.115535 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.115544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.115561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115581 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.115587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.115595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.115623 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.115629 | orchestrator | 2026-04-04 01:03:04.115636 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-04 01:03:04.115642 | orchestrator | Saturday 04 April 2026 01:01:15 +0000 (0:00:01.000) 0:01:10.237 ******** 2026-04-04 01:03:04.115648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-04-04 01:03:04.115673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115772 | orchestrator | 2026-04-04 01:03:04.115778 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-04 01:03:04.115784 | orchestrator | Saturday 04 April 2026 01:01:19 +0000 (0:00:03.972) 0:01:14.210 ******** 2026-04-04 01:03:04.115791 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-04 01:03:04.115799 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.115805 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-04 01:03:04.115811 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.115817 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-04 01:03:04.115827 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.115833 | orchestrator | 2026-04-04 01:03:04.115839 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-04 01:03:04.115845 | orchestrator | Saturday 04 April 2026 01:01:20 +0000 (0:00:00.756) 0:01:14.967 ******** 2026-04-04 01:03:04.115850 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:03:04.115856 | orchestrator | 2026-04-04 01:03:04.115862 | orchestrator | TASK [service-uwsgi-config : Copying 
over cinder-api uWSGI config] ************* 2026-04-04 01:03:04.115868 | orchestrator | Saturday 04 April 2026 01:01:21 +0000 (0:00:00.748) 0:01:15.715 ******** 2026-04-04 01:03:04.115874 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:04.115879 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.115885 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:04.115891 | orchestrator | 2026-04-04 01:03:04.115900 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-04 01:03:04.115906 | orchestrator | Saturday 04 April 2026 01:01:23 +0000 (0:00:02.115) 0:01:17.831 ******** 2026-04-04 01:03:04.115917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.115941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.115990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116014 | orchestrator | 2026-04-04 01:03:04.116022 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-04 01:03:04.116029 | orchestrator | Saturday 04 April 2026 01:01:34 +0000 (0:00:10.613) 0:01:28.444 ******** 2026-04-04 01:03:04.116035 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116041 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:04.116047 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:04.116052 | orchestrator | 2026-04-04 01:03:04.116059 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-04 01:03:04.116065 | orchestrator | Saturday 04 April 2026 01:01:35 +0000 (0:00:01.270) 0:01:29.715 ******** 2026-04-04 01:03:04.116071 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116077 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:04.116083 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:04.116119 | orchestrator | 2026-04-04 01:03:04.116127 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-04 01:03:04.116133 | orchestrator | Saturday 04 April 2026 01:01:36 +0000 (0:00:01.539) 0:01:31.255 ******** 2026-04-04 01:03:04.116139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.116151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116170 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.116181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.116187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116206 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.116215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.116226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116249 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.116254 | orchestrator | 2026-04-04 01:03:04.116260 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-04 01:03:04.116267 | orchestrator | Saturday 04 April 2026 01:01:37 +0000 (0:00:00.805) 0:01:32.060 ******** 2026-04-04 01:03:04.116272 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.116278 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.116284 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.116290 | orchestrator | 2026-04-04 01:03:04.116295 | orchestrator | 
TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-04 01:03:04.116301 | orchestrator | Saturday 04 April 2026 01:01:38 +0000 (0:00:00.287) 0:01:32.348 ******** 2026-04-04 01:03:04.116310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.116321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.116328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:03:04.116338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-04 01:03:04.116344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:04.116408 | orchestrator | 2026-04-04 01:03:04.116414 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-04 
01:03:04.116420 | orchestrator | Saturday 04 April 2026 01:01:41 +0000 (0:00:03.485) 0:01:35.834 ******** 2026-04-04 01:03:04.116426 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:03:04.116432 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:03:04.116438 | orchestrator | } 2026-04-04 01:03:04.116444 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:03:04.116450 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:03:04.116456 | orchestrator | } 2026-04-04 01:03:04.116462 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:03:04.116468 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:03:04.116474 | orchestrator | } 2026-04-04 01:03:04.116480 | orchestrator | 2026-04-04 01:03:04.116485 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:03:04.116495 | orchestrator | Saturday 04 April 2026 01:01:41 +0000 (0:00:00.298) 0:01:36.132 ******** 2026-04-04 01:03:04.116506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.116513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116528 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.116537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.116551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 
01:03:04.116558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116570 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.116576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:03:04.116585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:03:04.116614 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.116621 | orchestrator | 2026-04-04 01:03:04.116626 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:03:04.116633 | orchestrator | Saturday 04 April 2026 01:01:42 +0000 (0:00:01.051) 0:01:37.183 ******** 2026-04-04 01:03:04.116639 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.116645 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:04.116651 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:04.116657 | orchestrator | 2026-04-04 01:03:04.116663 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-04 01:03:04.116670 | orchestrator | Saturday 04 April 2026 01:01:43 +0000 (0:00:00.266) 0:01:37.450 ******** 2026-04-04 01:03:04.116675 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116681 | orchestrator | 2026-04-04 01:03:04.116687 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-04 01:03:04.116693 | orchestrator | Saturday 04 April 2026 01:01:45 +0000 (0:00:02.001) 0:01:39.452 ******** 2026-04-04 
01:03:04.116699 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116705 | orchestrator | 2026-04-04 01:03:04.116711 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-04 01:03:04.116717 | orchestrator | Saturday 04 April 2026 01:01:47 +0000 (0:00:02.293) 0:01:41.745 ******** 2026-04-04 01:03:04.116723 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116729 | orchestrator | 2026-04-04 01:03:04.116735 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-04 01:03:04.116741 | orchestrator | Saturday 04 April 2026 01:02:05 +0000 (0:00:17.974) 0:01:59.720 ******** 2026-04-04 01:03:04.116747 | orchestrator | 2026-04-04 01:03:04.116753 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-04 01:03:04.116759 | orchestrator | Saturday 04 April 2026 01:02:05 +0000 (0:00:00.100) 0:01:59.820 ******** 2026-04-04 01:03:04.116765 | orchestrator | 2026-04-04 01:03:04.116771 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-04 01:03:04.116777 | orchestrator | Saturday 04 April 2026 01:02:05 +0000 (0:00:00.100) 0:01:59.921 ******** 2026-04-04 01:03:04.116783 | orchestrator | 2026-04-04 01:03:04.116788 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-04 01:03:04.116794 | orchestrator | Saturday 04 April 2026 01:02:05 +0000 (0:00:00.190) 0:02:00.112 ******** 2026-04-04 01:03:04.116800 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116810 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:04.116816 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:04.116822 | orchestrator | 2026-04-04 01:03:04.116828 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-04 01:03:04.116834 | orchestrator | Saturday 04 April 
2026 01:02:24 +0000 (0:00:18.508) 0:02:18.621 ******** 2026-04-04 01:03:04.116840 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:04.116846 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:04.116851 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116857 | orchestrator | 2026-04-04 01:03:04.116863 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-04 01:03:04.116869 | orchestrator | Saturday 04 April 2026 01:02:33 +0000 (0:00:09.313) 0:02:27.934 ******** 2026-04-04 01:03:04.116875 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116884 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:04.116890 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:04.116896 | orchestrator | 2026-04-04 01:03:04.116902 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-04 01:03:04.116908 | orchestrator | Saturday 04 April 2026 01:02:53 +0000 (0:00:19.611) 0:02:47.546 ******** 2026-04-04 01:03:04.116914 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:04.116920 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:04.116926 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:04.116932 | orchestrator | 2026-04-04 01:03:04.116938 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-04 01:03:04.116944 | orchestrator | Saturday 04 April 2026 01:03:03 +0000 (0:00:10.213) 0:02:57.759 ******** 2026-04-04 01:03:04.116949 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:04.116955 | orchestrator | 2026-04-04 01:03:04.116961 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:03:04.116968 | orchestrator | testbed-node-0 : ok=33  changed=24  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-04 01:03:04.116975 | orchestrator | testbed-node-1 : ok=24  changed=17  
unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-04 01:03:04.116984 | orchestrator | testbed-node-2 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-04 01:03:04.116991 | orchestrator | 2026-04-04 01:03:04.116996 | orchestrator | 2026-04-04 01:03:04.117002 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:03:04.117008 | orchestrator | Saturday 04 April 2026 01:03:03 +0000 (0:00:00.324) 0:02:58.084 ******** 2026-04-04 01:03:04.117014 | orchestrator | =============================================================================== 2026-04-04 01:03:04.117020 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 19.61s 2026-04-04 01:03:04.117026 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.51s 2026-04-04 01:03:04.117032 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.97s 2026-04-04 01:03:04.117038 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 14.26s 2026-04-04 01:03:04.117043 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.61s 2026-04-04 01:03:04.117049 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.21s 2026-04-04 01:03:04.117055 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.31s 2026-04-04 01:03:04.117061 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 7.85s 2026-04-04 01:03:04.117067 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 6.63s 2026-04-04 01:03:04.117073 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.04s 2026-04-04 01:03:04.117079 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 
5.30s
2026-04-04 01:03:04.117103 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.15s
2026-04-04 01:03:04.117110 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.97s
2026-04-04 01:03:04.117116 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.95s
2026-04-04 01:03:04.117122 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.77s
2026-04-04 01:03:04.117129 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.49s
2026-04-04 01:03:04.117135 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.19s
2026-04-04 01:03:04.117141 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.94s
2026-04-04 01:03:04.117147 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.43s
2026-04-04 01:03:04.117153 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.29s
2026-04-04 01:03:04.117159 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:03:04.117165 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:03:04.117171 | orchestrator | 2026-04-04 01:03:04 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:03:07.137685 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task f7bf065f-e6cb-4ddd-8926-2205affa0335 is in state STARTED
2026-04-04 01:03:07.138217 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:03:07.138850 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:03:07.139538 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:03:07.139603 | orchestrator | 2026-04-04 01:03:07 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks f7bf065f, 649651f2, 34fefefe and 2b34a236 (all in state STARTED) repeated every ~3 seconds from 01:03:10 through 01:04:07 ...]
2026-04-04 01:04:10.848381 | orchestrator | 2026-04-04 01:04:10 | INFO  | Task f7bf065f-e6cb-4ddd-8926-2205affa0335 is in state SUCCESS
2026-04-04 01:04:10.851922 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:04:10.851929 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
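The osism client output above is a plain state-polling loop: it queries each task's state, waits a fixed interval, and repeats until the tasks leave the STARTED state. A minimal sketch of that pattern, with a hypothetical `get_state` callable standing in for the real task-status lookup (the actual client queries Celery task results; names here are illustrative):

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> dict:
    """Poll task states until none is still in a non-terminal state.

    `get_state` is a stand-in for whatever returns a task's current
    state string (e.g. an AsyncResult lookup in the real client).
    """
    pending = set(task_ids)
    states: dict = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {states[task_id]}")
        # Keep polling any task that has not reached a terminal state yet.
        pending = {t for t, s in states.items() if s not in ("SUCCESS", "FAILURE")}
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note the checks in the log land roughly three seconds apart even though the message says "Wait 1 second(s)": each status query itself takes time on top of the sleep.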
2026-04-04 01:04:10.851933 | orchestrator | Saturday 04 April 2026 01:02:30 +0000 (0:00:00.223) 0:00:00.223 ********
2026-04-04 01:04:10.851936 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:04:10.851941 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:04:10.851947 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:04:10.851957 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:04:10.851962 | orchestrator | Saturday 04 April 2026 01:02:30 +0000 (0:00:00.203) 0:00:00.427 ********
2026-04-04 01:04:10.851967 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-04 01:04:10.851973 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-04 01:04:10.851978 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-04 01:04:10.852068 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-04 01:04:10.852076 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-04 01:04:10.852086 | orchestrator | Saturday 04 April 2026 01:02:30 +0000 (0:00:00.360) 0:00:00.787 ********
2026-04-04 01:04:10.852090 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:04:10.852097 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-04-04 01:04:10.852100 | orchestrator | Saturday 04 April 2026 01:02:31 +0000 (0:00:00.668) 0:00:01.456 ********
2026-04-04 01:04:10.852104 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-04-04 01:04:10.852110 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************
2026-04-04 01:04:10.852113 | orchestrator | Saturday 04 April 2026 01:02:35 +0000 (0:00:03.661) 0:00:05.118 ********
2026-04-04 01:04:10.852116 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-04-04 01:04:10.852143 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-04-04 01:04:10.852151 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-04-04 01:04:10.852154 | orchestrator | Saturday 04 April 2026 01:02:41 +0000 (0:00:06.358) 0:00:11.477 ********
2026-04-04 01:04:10.852157 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:04:10.852163 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-04-04 01:04:10.852166 | orchestrator | Saturday 04 April 2026 01:02:44 +0000 (0:00:03.232) 0:00:14.709 ********
2026-04-04 01:04:10.852169 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-04-04 01:04:10.852172 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:04:10.852178 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-04-04 01:04:10.852181 | orchestrator | Saturday 04 April 2026 01:02:48 +0000 (0:00:03.578) 0:00:18.287 ********
2026-04-04 01:04:10.852185 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:04:10.852188 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-04-04 01:04:10.852191 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-04-04 01:04:10.852205 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-04-04 01:04:10.852208 | orchestrator |
changed: [testbed-node-0] => (item=audit) 2026-04-04 01:04:10.852211 | orchestrator | 2026-04-04 01:04:10.852215 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] *********** 2026-04-04 01:04:10.852218 | orchestrator | Saturday 04 April 2026 01:03:03 +0000 (0:00:15.509) 0:00:33.796 ******** 2026-04-04 01:04:10.852221 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-04 01:04:10.852225 | orchestrator | 2026-04-04 01:04:10.852231 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-04 01:04:10.852236 | orchestrator | Saturday 04 April 2026 01:03:07 +0000 (0:00:03.797) 0:00:37.594 ******** 2026-04-04 01:04:10.852405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852484 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852502 | orchestrator | 2026-04-04 01:04:10.852506 | orchestrator | 
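Each container entry in the item dicts above carries a `healthcheck` mapping (interval, retries, start_period, test, timeout) that ends up driving the container engine's health checks. A hypothetical helper showing how such a mapping corresponds to `docker run` health flags; this is an illustration of the data shape, not kolla-ansible's actual implementation:

```python
def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Translate a kolla-style healthcheck mapping into docker run
    health flags. Hypothetical helper for illustration only;
    kolla-ansible applies these settings via its container modules.
    """
    kind, cmd = hc["test"][0], " ".join(hc["test"][1:])
    if kind != "CMD-SHELL":
        raise ValueError(f"unsupported test type: {kind}")
    # kolla stores the durations as bare-second strings; docker
    # flags take duration strings such as "30s".
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
```

For the barbican-api entry on testbed-node-0 this would yield `--health-cmd "healthcheck_curl http://192.168.16.10:9311"` with a 30-second interval and timeout.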
TASK [barbican : Ensuring vassals config directories exist] ********************
2026-04-04 01:04:10.852509 | orchestrator | Saturday 04 April 2026 01:03:09 +0000 (0:00:02.032) 0:00:39.627 ********
2026-04-04 01:04:10.852515 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-04-04 01:04:10.852519 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-04-04 01:04:10.852522 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-04-04 01:04:10.852528 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-04-04 01:04:10.852531 | orchestrator | Saturday 04 April 2026 01:03:11 +0000 (0:00:01.532) 0:00:41.159 ********
2026-04-04 01:04:10.852534 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:04:10.852540 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-04-04 01:04:10.852544 | orchestrator | Saturday 04 April 2026 01:03:11 +0000 (0:00:00.103) 0:00:41.263 ********
2026-04-04 01:04:10.852547 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:04:10.852550 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:04:10.852553 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:04:10.852559 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-04 01:04:10.852562 | orchestrator | Saturday 04 April 2026 01:03:11 +0000 (0:00:00.301) 0:00:41.564 ********
2026-04-04 01:04:10.852565 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:04:10.852571 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-04-04 01:04:10.852574 |
orchestrator | Saturday 04 April 2026 01:03:12 +0000 (0:00:00.906) 0:00:42.471 ******** 2026-04-04 01:04:10.852578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 
01:04:10.852601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-04-04 01:04:10.852616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852625 | orchestrator | 2026-04-04 01:04:10.852629 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-04 01:04:10.852632 | orchestrator | Saturday 04 April 2026 01:03:16 +0000 (0:00:03.932) 0:00:46.403 ******** 2026-04-04 01:04:10.852635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.852639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 
01:04:10.852648 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:10.852653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.852659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852666 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:10.852669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.852675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852684 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:10.852687 | orchestrator | 2026-04-04 01:04:10.852690 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-04 01:04:10.852693 | orchestrator | Saturday 04 April 2026 01:03:17 +0000 (0:00:00.971) 0:00:47.374 ******** 2026-04-04 01:04:10.852698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.852702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852709 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:10.852725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.852729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852744 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:10.852747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.852750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852757 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:10.852760 | orchestrator | 2026-04-04 01:04:10.852763 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-04 01:04:10.852766 | orchestrator | Saturday 04 April 2026 01:03:18 +0000 (0:00:00.785) 0:00:48.160 ******** 2026-04-04 01:04:10.852791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852835 | orchestrator | 2026-04-04 01:04:10.852839 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-04 01:04:10.852842 | orchestrator | Saturday 04 April 2026 01:03:21 +0000 (0:00:03.489) 0:00:51.649 ******** 2026-04-04 01:04:10.852845 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:10.852848 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:04:10.852851 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:04:10.852854 | orchestrator | 2026-04-04 01:04:10.852857 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-04 01:04:10.852861 | orchestrator | Saturday 04 April 2026 01:03:23 +0000 (0:00:01.542) 0:00:53.192 ******** 2026-04-04 01:04:10.852864 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:04:10.852867 | orchestrator | 2026-04-04 01:04:10.852870 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-04 01:04:10.852873 | orchestrator | Saturday 04 April 2026 01:03:24 +0000 (0:00:01.170) 0:00:54.362 ******** 2026-04-04 01:04:10.852876 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:10.852879 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:10.852882 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:10.852888 | orchestrator | 2026-04-04 01:04:10.852891 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-04 01:04:10.852894 | orchestrator | Saturday 04 April 2026 01:03:25 +0000 (0:00:00.820) 0:00:55.183 ******** 2026-04-04 01:04:10.852899 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.852912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.852939 | orchestrator | 2026-04-04 01:04:10.852945 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-04 01:04:10.852949 | orchestrator | Saturday 04 April 2026 01:03:32 +0000 (0:00:07.184) 0:01:02.368 ******** 2026-04-04 01:04:10.852955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.852966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.852978 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:10.853025 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.853029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853046 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:10.853049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.853064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853078 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:10.853083 | orchestrator | 2026-04-04 01:04:10.853088 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-04-04 01:04:10.853093 | orchestrator | Saturday 04 April 2026 01:03:33 +0000 (0:00:01.112) 0:01:03.480 ******** 2026-04-04 01:04:10.853098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.853104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.853118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:04:10.853129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.853136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.853141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.853150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.853156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.853164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:04:10.853169 | orchestrator | 2026-04-04 01:04:10.853175 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-04-04 01:04:10.853180 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:02.907) 0:01:06.387 ******** 2026-04-04 01:04:10.853185 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:04:10.853191 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:04:10.853196 | orchestrator | } 2026-04-04 01:04:10.853201 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:04:10.853206 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:04:10.853212 | orchestrator | } 2026-04-04 01:04:10.853217 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:04:10.853222 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:04:10.853228 | orchestrator | } 2026-04-04 01:04:10.853232 | orchestrator | 2026-04-04 01:04:10.853237 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:04:10.853242 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:00.273) 0:01:06.661 ******** 2026-04-04 01:04:10.853250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.853256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2026-04-04 01:04:10.853270 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:10.853278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.853284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853296 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:10.853301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:04:10.853310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:04:10.853322 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:10.853327 | orchestrator | 2026-04-04 01:04:10.853332 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-04 01:04:10.853337 | orchestrator | Saturday 04 April 2026 01:03:37 +0000 (0:00:00.901) 0:01:07.562 ******** 2026-04-04 01:04:10.853342 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:10.853347 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:10.853352 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:10.853358 | orchestrator | 2026-04-04 01:04:10.853363 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-04 01:04:10.853371 | orchestrator | Saturday 04 April 2026 01:03:37 +0000 (0:00:00.259) 0:01:07.822 ******** 2026-04-04 01:04:10.853376 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:10.853381 | orchestrator | 
2026-04-04 01:04:10.853386 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-04 01:04:10.853392 | orchestrator | Saturday 04 April 2026 01:03:39 +0000 (0:00:02.022) 0:01:09.844 ********
2026-04-04 01:04:10.853397 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:04:10.853403 | orchestrator |
2026-04-04 01:04:10.853407 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-04 01:04:10.853413 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:02.150) 0:01:11.995 ********
2026-04-04 01:04:10.853418 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:04:10.853423 | orchestrator |
2026-04-04 01:04:10.853428 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-04 01:04:10.853434 | orchestrator | Saturday 04 April 2026 01:03:53 +0000 (0:00:11.753) 0:01:23.748 ********
2026-04-04 01:04:10.853439 | orchestrator |
2026-04-04 01:04:10.853444 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-04 01:04:10.853449 | orchestrator | Saturday 04 April 2026 01:03:53 +0000 (0:00:00.119) 0:01:23.868 ********
2026-04-04 01:04:10.853455 | orchestrator |
2026-04-04 01:04:10.853464 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-04 01:04:10.853472 | orchestrator | Saturday 04 April 2026 01:03:53 +0000 (0:00:00.103) 0:01:23.972 ********
2026-04-04 01:04:10.853478 | orchestrator |
2026-04-04 01:04:10.853483 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-04 01:04:10.853489 | orchestrator | Saturday 04 April 2026 01:03:54 +0000 (0:00:00.095) 0:01:24.067 ********
2026-04-04 01:04:10.853494 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:04:10.853500 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:04:10.853505 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:04:10.853510 | orchestrator |
2026-04-04 01:04:10.853516 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-04 01:04:10.853521 | orchestrator | Saturday 04 April 2026 01:03:59 +0000 (0:00:05.673) 0:01:29.741 ********
2026-04-04 01:04:10.853526 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:04:10.853532 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:04:10.853537 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:04:10.853542 | orchestrator |
2026-04-04 01:04:10.853548 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-04 01:04:10.853553 | orchestrator | Saturday 04 April 2026 01:04:05 +0000 (0:00:05.498) 0:01:35.240 ********
2026-04-04 01:04:10.853559 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:04:10.853564 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:04:10.853571 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:04:10.853576 | orchestrator |
2026-04-04 01:04:10.853585 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:04:10.853595 | orchestrator | testbed-node-0 : ok=25  changed=19  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-04 01:04:10.853605 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 01:04:10.853610 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 01:04:10.853616 | orchestrator |
2026-04-04 01:04:10.853621 | orchestrator |
2026-04-04 01:04:10.853626 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:04:10.853632 | orchestrator | Saturday 04 April 2026 01:04:10 +0000 (0:00:05.201) 0:01:40.441 ********
2026-04-04 01:04:10.853637 | orchestrator |
===============================================================================
2026-04-04 01:04:10.853643 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.51s
2026-04-04 01:04:10.853648 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.75s
2026-04-04 01:04:10.853653 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.18s
2026-04-04 01:04:10.853658 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.36s
2026-04-04 01:04:10.853663 | orchestrator | barbican : Restart barbican-api container ------------------------------- 5.67s
2026-04-04 01:04:10.853669 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.50s
2026-04-04 01:04:10.853674 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.20s
2026-04-04 01:04:10.853680 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.93s
2026-04-04 01:04:10.853685 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 3.80s
2026-04-04 01:04:10.853690 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 3.66s
2026-04-04 01:04:10.853696 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.58s
2026-04-04 01:04:10.853701 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.49s
2026-04-04 01:04:10.853706 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.23s
2026-04-04 01:04:10.853715 | orchestrator | service-check-containers : barbican | Check containers ------------------ 2.91s
2026-04-04 01:04:10.853729 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.15s
2026-04-04 01:04:10.853740 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.03s
2026-04-04 01:04:10.853745 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.02s
2026-04-04 01:04:10.853750 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.54s
2026-04-04 01:04:10.853756 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.53s
2026-04-04 01:04:10.853764 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.17s
2026-04-04 01:04:10.853770 | orchestrator | 2026-04-04 01:04:10 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:10.853785 | orchestrator | 2026-04-04 01:04:10 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:10.854490 | orchestrator | 2026-04-04 01:04:10 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:10.854527 | orchestrator | 2026-04-04 01:04:10 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:13.877690 | orchestrator | 2026-04-04 01:04:13 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:13.879730 | orchestrator | 2026-04-04 01:04:13 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:13.880217 | orchestrator | 2026-04-04 01:04:13 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:13.880892 | orchestrator | 2026-04-04 01:04:13 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:13.880954 | orchestrator | 2026-04-04 01:04:13 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:16.907587 | orchestrator | 2026-04-04 01:04:16 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:16.910515 | orchestrator | 2026-04-04 01:04:16 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:16.912409 | orchestrator | 2026-04-04 01:04:16 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:16.914304 | orchestrator | 2026-04-04 01:04:16 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:16.914597 | orchestrator | 2026-04-04 01:04:16 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:19.989959 | orchestrator | 2026-04-04 01:04:19 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:19.990053 | orchestrator | 2026-04-04 01:04:19 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:19.990114 | orchestrator | 2026-04-04 01:04:19 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:19.991895 | orchestrator | 2026-04-04 01:04:19 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:19.991948 | orchestrator | 2026-04-04 01:04:19 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:23.019487 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:23.020186 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:23.021235 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:23.023709 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:23.023777 | orchestrator | 2026-04-04 01:04:23 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:26.067149 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:26.067217 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:26.067230 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:26.067240 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:26.067250 | orchestrator | 2026-04-04 01:04:26 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:29.104408 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:29.106124 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:29.107327 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:29.109504 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:29.109545 | orchestrator | 2026-04-04 01:04:29 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:32.154180 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:32.156540 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:32.159869 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:32.163095 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:32.163146 | orchestrator | 2026-04-04 01:04:32 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:35.207745 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:35.208593 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:35.209568 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:35.210514 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:35.210536 | orchestrator | 2026-04-04 01:04:35 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:38.255272 | orchestrator | 2026-04-04 01:04:38 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:38.256745 | orchestrator | 2026-04-04 01:04:38 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:38.258447 | orchestrator | 2026-04-04 01:04:38 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:38.260072 | orchestrator | 2026-04-04 01:04:38 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:38.260121 | orchestrator | 2026-04-04 01:04:38 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:41.307612 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:41.309175 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:41.311639 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:04:41.313752 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:04:41.314032 | orchestrator | 2026-04-04 01:04:41 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:04:44.360573 | orchestrator | 2026-04-04 01:04:44 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED
2026-04-04 01:04:44.360634 | orchestrator | 2026-04-04 01:04:44 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:04:44.361226 | orchestrator | 2026-04-04 01:04:44 | INFO  |
Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:04:44.361931 | orchestrator | 2026-04-04 01:04:44 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:04:44.361982 | orchestrator | 2026-04-04 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:47.391552 | orchestrator | 2026-04-04 01:04:47 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:04:47.391806 | orchestrator | 2026-04-04 01:04:47 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:04:47.392723 | orchestrator | 2026-04-04 01:04:47 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:04:47.393590 | orchestrator | 2026-04-04 01:04:47 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:04:47.393622 | orchestrator | 2026-04-04 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:50.415643 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:04:50.416051 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:04:50.416609 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:04:50.418488 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:04:50.418526 | orchestrator | 2026-04-04 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:53.454182 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:04:53.454890 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:04:53.455698 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task 
2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:04:53.457744 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:04:53.457797 | orchestrator | 2026-04-04 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:56.488957 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:04:56.490141 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:04:56.492486 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:04:56.494601 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:04:56.494670 | orchestrator | 2026-04-04 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:59.534384 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:04:59.535742 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:04:59.536436 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:04:59.537688 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:04:59.537724 | orchestrator | 2026-04-04 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:02.578428 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:02.581576 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:02.583305 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task 
2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:02.585079 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:02.585135 | orchestrator | 2026-04-04 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:05.635595 | orchestrator | 2026-04-04 01:05:05 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:05.638144 | orchestrator | 2026-04-04 01:05:05 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:05.641107 | orchestrator | 2026-04-04 01:05:05 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:05.642747 | orchestrator | 2026-04-04 01:05:05 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:05.642895 | orchestrator | 2026-04-04 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:08.693366 | orchestrator | 2026-04-04 01:05:08 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:08.698125 | orchestrator | 2026-04-04 01:05:08 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:08.700435 | orchestrator | 2026-04-04 01:05:08 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:08.703432 | orchestrator | 2026-04-04 01:05:08 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:08.703805 | orchestrator | 2026-04-04 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:11.746593 | orchestrator | 2026-04-04 01:05:11 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:11.747762 | orchestrator | 2026-04-04 01:05:11 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:11.748427 | orchestrator | 2026-04-04 01:05:11 | INFO  | Task 
2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:11.750347 | orchestrator | 2026-04-04 01:05:11 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:11.750418 | orchestrator | 2026-04-04 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:14.791557 | orchestrator | 2026-04-04 01:05:14 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:14.791607 | orchestrator | 2026-04-04 01:05:14 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:14.793965 | orchestrator | 2026-04-04 01:05:14 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:14.794648 | orchestrator | 2026-04-04 01:05:14 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:14.794790 | orchestrator | 2026-04-04 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:17.845077 | orchestrator | 2026-04-04 01:05:17 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:17.847244 | orchestrator | 2026-04-04 01:05:17 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:17.849923 | orchestrator | 2026-04-04 01:05:17 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:17.852303 | orchestrator | 2026-04-04 01:05:17 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:17.852352 | orchestrator | 2026-04-04 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:20.894854 | orchestrator | 2026-04-04 01:05:20 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:20.896080 | orchestrator | 2026-04-04 01:05:20 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:20.897547 | orchestrator | 2026-04-04 01:05:20 | INFO  | Task 
2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:20.899999 | orchestrator | 2026-04-04 01:05:20 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:20.900058 | orchestrator | 2026-04-04 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:23.941680 | orchestrator | 2026-04-04 01:05:23 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:23.942220 | orchestrator | 2026-04-04 01:05:23 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:23.944249 | orchestrator | 2026-04-04 01:05:23 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:23.944782 | orchestrator | 2026-04-04 01:05:23 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:23.944815 | orchestrator | 2026-04-04 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:26.993020 | orchestrator | 2026-04-04 01:05:26 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:26.994951 | orchestrator | 2026-04-04 01:05:26 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:26.996076 | orchestrator | 2026-04-04 01:05:26 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:26.996537 | orchestrator | 2026-04-04 01:05:26 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:26.996601 | orchestrator | 2026-04-04 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:30.033768 | orchestrator | 2026-04-04 01:05:30 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state STARTED 2026-04-04 01:05:30.036034 | orchestrator | 2026-04-04 01:05:30 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:30.037398 | orchestrator | 2026-04-04 01:05:30 | INFO  | Task 
2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED
2026-04-04 01:05:30.039356 | orchestrator | 2026-04-04 01:05:30 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED
2026-04-04 01:05:30.039404 | orchestrator | 2026-04-04 01:05:30 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:33.097504 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task 649651f2-55af-45ff-b44d-c0185fbcea73 is in state SUCCESS
2026-04-04 01:05:33.097576 | orchestrator |
2026-04-04 01:05:33.099138 | orchestrator |
2026-04-04 01:05:33.099184 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:05:33.099197 | orchestrator |
2026-04-04 01:05:33.099208 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 01:05:33.099238 | orchestrator | Saturday 04 April 2026 01:01:36 +0000 (0:00:00.270) 0:00:00.270 ********
2026-04-04 01:05:33.099266 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:05:33.099278 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:05:33.099286 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:05:33.099296 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:05:33.099305 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:05:33.099314 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:05:33.099323 | orchestrator |
2026-04-04 01:05:33.099333 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:05:33.099343 | orchestrator | Saturday 04 April 2026 01:01:37 +0000 (0:00:00.604) 0:00:00.874 ********
2026-04-04 01:05:33.099351 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-04 01:05:33.099361 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-04 01:05:33.099455 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-04 01:05:33.099466 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-04 01:05:33.099475 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-04 01:05:33.099485 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-04 01:05:33.099494 | orchestrator |
2026-04-04 01:05:33.099504 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-04 01:05:33.099514 | orchestrator |
2026-04-04 01:05:33.099524 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-04 01:05:33.099559 | orchestrator | Saturday 04 April 2026 01:01:38 +0000 (0:00:00.590) 0:00:01.465 ********
2026-04-04 01:05:33.099816 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 01:05:33.099829 | orchestrator |
2026-04-04 01:05:33.099837 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-04 01:05:33.099846 | orchestrator | Saturday 04 April 2026 01:01:39 +0000 (0:00:00.979) 0:00:02.444 ********
2026-04-04 01:05:33.099855 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:05:33.099943 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:05:33.099953 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:05:33.099973 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:05:33.099983 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:05:33.099992 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:05:33.100002 | orchestrator |
2026-04-04 01:05:33.100012 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-04 01:05:33.100021 | orchestrator | Saturday 04 April 2026 01:01:40 +0000 (0:00:01.607) 0:00:04.052 ********
2026-04-04 01:05:33.100032 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:05:33.100041 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:05:33.100051 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:05:33.100348 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:05:33.100365 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:05:33.100375 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:05:33.100383 | orchestrator |
2026-04-04 01:05:33.100393 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-04 01:05:33.100403 | orchestrator | Saturday 04 April 2026 01:01:42 +0000 (0:00:01.297) 0:00:05.349 ********
2026-04-04 01:05:33.100412 | orchestrator | ok: [testbed-node-0] => {
2026-04-04 01:05:33.100422 | orchestrator |  "changed": false,
2026-04-04 01:05:33.100432 | orchestrator |  "msg": "All assertions passed"
2026-04-04 01:05:33.100442 | orchestrator | }
2026-04-04 01:05:33.100452 | orchestrator | ok: [testbed-node-1] => {
2026-04-04 01:05:33.100461 | orchestrator |  "changed": false,
2026-04-04 01:05:33.100471 | orchestrator |  "msg": "All assertions passed"
2026-04-04 01:05:33.100481 | orchestrator | }
2026-04-04 01:05:33.100491 | orchestrator | ok: [testbed-node-2] => {
2026-04-04 01:05:33.100501 | orchestrator |  "changed": false,
2026-04-04 01:05:33.100510 | orchestrator |  "msg": "All assertions passed"
2026-04-04 01:05:33.100520 | orchestrator | }
2026-04-04 01:05:33.100542 | orchestrator | ok: [testbed-node-3] => {
2026-04-04 01:05:33.100552 | orchestrator |  "changed": false,
2026-04-04 01:05:33.100561 | orchestrator |  "msg": "All assertions passed"
2026-04-04 01:05:33.100570 | orchestrator | }
2026-04-04 01:05:33.100580 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 01:05:33.100589 | orchestrator |  "changed": false,
2026-04-04 01:05:33.100599 | orchestrator |  "msg": "All assertions passed"
2026-04-04 01:05:33.100608 | orchestrator | }
2026-04-04 01:05:33.100617 | orchestrator | ok: [testbed-node-5] => {
2026-04-04 01:05:33.100626 | orchestrator |  "changed": false,
2026-04-04 01:05:33.100637 | orchestrator |  "msg": "All assertions passed"
2026-04-04 01:05:33.100647 | 
orchestrator | }
2026-04-04 01:05:33.100656 | orchestrator |
2026-04-04 01:05:33.100666 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-04 01:05:33.100675 | orchestrator | Saturday 04 April 2026 01:01:42 +0000 (0:00:00.512) 0:00:05.862 ********
2026-04-04 01:05:33.100684 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.100694 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.100704 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.100714 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.100724 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.100732 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.100741 | orchestrator |
2026-04-04 01:05:33.100750 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-04-04 01:05:33.100760 | orchestrator | Saturday 04 April 2026 01:01:43 +0000 (0:00:00.589) 0:00:06.452 ********
2026-04-04 01:05:33.100770 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-04 01:05:33.100780 | orchestrator |
2026-04-04 01:05:33.100790 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] *************
2026-04-04 01:05:33.100800 | orchestrator | Saturday 04 April 2026 01:01:46 +0000 (0:00:03.541) 0:00:09.993 ********
2026-04-04 01:05:33.100809 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-04 01:05:33.100819 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-04 01:05:33.100829 | orchestrator |
2026-04-04 01:05:33.100897 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-04 01:05:33.100908 | orchestrator | Saturday 04 April 2026 01:01:54 +0000 (0:00:07.590) 0:00:17.583 ********
2026-04-04 01:05:33.100918 | 
orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:05:33.100927 | orchestrator |
2026-04-04 01:05:33.100993 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-04 01:05:33.101005 | orchestrator | Saturday 04 April 2026 01:01:57 +0000 (0:00:02.914) 0:00:20.497 ********
2026-04-04 01:05:33.101015 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-04 01:05:33.101025 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:05:33.101035 | orchestrator |
2026-04-04 01:05:33.101044 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-04 01:05:33.101053 | orchestrator | Saturday 04 April 2026 01:02:01 +0000 (0:00:04.080) 0:00:24.578 ********
2026-04-04 01:05:33.101062 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:05:33.101073 | orchestrator |
2026-04-04 01:05:33.101083 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************
2026-04-04 01:05:33.101093 | orchestrator | Saturday 04 April 2026 01:02:04 +0000 (0:00:03.316) 0:00:27.895 ********
2026-04-04 01:05:33.101102 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-04 01:05:33.101113 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-04 01:05:33.101119 | orchestrator |
2026-04-04 01:05:33.101126 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-04 01:05:33.101133 | orchestrator | Saturday 04 April 2026 01:02:11 +0000 (0:00:07.163) 0:00:35.059 ********
2026-04-04 01:05:33.101149 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.101155 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.101161 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.101168 | orchestrator | skipping: [testbed-node-3]
2026-04-04 
01:05:33.101174 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.101181 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.101187 | orchestrator |
2026-04-04 01:05:33.101194 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-04 01:05:33.101201 | orchestrator | Saturday 04 April 2026 01:02:12 +0000 (0:00:00.518) 0:00:35.577 ********
2026-04-04 01:05:33.101207 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.101214 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.101220 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.101233 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.101240 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.101246 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.101253 | orchestrator |
2026-04-04 01:05:33.101260 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-04 01:05:33.101266 | orchestrator | Saturday 04 April 2026 01:02:14 +0000 (0:00:01.861) 0:00:37.439 ********
2026-04-04 01:05:33.101273 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:05:33.101279 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:05:33.101286 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:05:33.101292 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:05:33.101298 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:05:33.101305 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:05:33.101312 | orchestrator |
2026-04-04 01:05:33.101319 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-04 01:05:33.101325 | orchestrator | Saturday 04 April 2026 01:02:14 +0000 (0:00:00.881) 0:00:38.320 ********
2026-04-04 01:05:33.101332 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.101338 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.101345 | orchestrator | 
skipping: [testbed-node-0]
2026-04-04 01:05:33.101351 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.101358 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.101364 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.101370 | orchestrator |
2026-04-04 01:05:33.101375 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-04 01:05:33.101381 | orchestrator | Saturday 04 April 2026 01:02:16 +0000 (0:00:01.657) 0:00:39.978 ********
2026-04-04 01:05:33.101388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.101424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.101436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.101445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.101453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.101458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.101464 | orchestrator | 2026-04-04 01:05:33.101470 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-04 01:05:33.101475 | orchestrator | Saturday 04 April 2026 01:02:18 +0000 (0:00:02.172) 0:00:42.151 ******** 2026-04-04 01:05:33.101491 | orchestrator | [WARNING]: Skipped 2026-04-04 01:05:33.101497 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-04 01:05:33.101518 | orchestrator | due to this access issue: 2026-04-04 01:05:33.101525 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-04 01:05:33.101530 | orchestrator | a directory 2026-04-04 01:05:33.101536 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:05:33.101541 | orchestrator | 2026-04-04 01:05:33.101547 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-04 01:05:33.101554 | orchestrator | Saturday 04 April 2026 01:02:19 +0000 (0:00:00.676) 0:00:42.827 ******** 2026-04-04 01:05:33.101564 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:05:33.101579 | orchestrator | 2026-04-04 01:05:33.101589 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-04 01:05:33.101600 | orchestrator | Saturday 04 April 2026 01:02:20 +0000 (0:00:00.998) 0:00:43.825 ******** 2026-04-04 01:05:33.101614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.101626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.101635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.101678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.101692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.101706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.101716 | orchestrator | 2026-04-04 01:05:33.101725 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-04 01:05:33.101735 | orchestrator | Saturday 04 April 2026 01:02:23 +0000 (0:00:02.786) 0:00:46.612 ******** 2026-04-04 01:05:33.101747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.101761 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:33.101771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.101786 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:33.101823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.101835 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.101845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.101851 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:05:33.101874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.101883 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:05:33.101889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.101899 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:05:33.101905 | orchestrator | 2026-04-04 01:05:33.101963 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-04 01:05:33.101974 | orchestrator | Saturday 04 April 2026 01:02:25 +0000 (0:00:01.847) 0:00:48.460 ******** 2026-04-04 01:05:33.102008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.102052 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:33.102062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.102072 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.102086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.102096 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:33.102105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.102123 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:05:33.102160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.102172 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:05:33.102182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.102192 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:05:33.102201 | orchestrator | 2026-04-04 01:05:33.102212 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-04 01:05:33.102221 | orchestrator | Saturday 04 April 2026 01:02:28 +0000 (0:00:02.891) 0:00:51.351 ******** 2026-04-04 01:05:33.102231 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.102240 | 
orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.102250 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.102259 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.102269 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.102279 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.102289 | orchestrator |
2026-04-04 01:05:33.102298 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-04-04 01:05:33.102309 | orchestrator | Saturday 04 April 2026 01:02:29 +0000 (0:00:01.670) 0:00:53.021 ********
2026-04-04 01:05:33.102318 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.102328 | orchestrator |
2026-04-04 01:05:33.102338 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-04-04 01:05:33.102347 | orchestrator | Saturday 04 April 2026 01:02:29 +0000 (0:00:00.187) 0:00:53.209 ********
2026-04-04 01:05:33.102362 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.102372 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.102381 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.102392 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.102402 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.102411 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.102428 | orchestrator |
2026-04-04 01:05:33.102438 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-04-04 01:05:33.102448 | orchestrator | Saturday 04 April 2026 01:02:30 +0000 (0:00:00.423) 0:00:53.632 ********
2026-04-04 01:05:33.102458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.102468 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.102478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.102516 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:33.102526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.102535 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:05:33.102548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.102563 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:33.102573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.102581 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:05:33.102590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.102599 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:05:33.102608 | orchestrator | 2026-04-04 01:05:33.102616 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-04 01:05:33.102626 | orchestrator | Saturday 04 April 2026 01:02:32 +0000 (0:00:01.866) 0:00:55.498 ******** 2026-04-04 01:05:33.102641 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.102652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.102672 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.102684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.102696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.102714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.102726 | orchestrator | 2026-04-04 01:05:33.102735 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-04 01:05:33.102744 | orchestrator | Saturday 04 April 2026 01:02:34 +0000 (0:00:02.298) 0:00:57.797 ******** 2026-04-04 01:05:33.102763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.102781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.102793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.102811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.102823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.102844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.102854 | orchestrator |
2026-04-04 01:05:33.102881 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-04-04 01:05:33.102891 | orchestrator | Saturday 04 April 2026 01:02:40 +0000 (0:00:05.635) 0:01:03.432 ********
2026-04-04 01:05:33.102901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.102912 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.102927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.102938 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.102949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.102965 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.102979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.102990 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103019 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103029 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103038 | orchestrator |
2026-04-04 01:05:33.103047 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-04 01:05:33.103057 | orchestrator | Saturday 04 April 2026 01:02:41 +0000 (0:00:01.472) 0:01:04.905 ********
2026-04-04 01:05:33.103066 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103075 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103085 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103094 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:05:33.103104 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:05:33.103113 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:05:33.103122 | orchestrator |
2026-04-04 01:05:33.103131 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-04 01:05:33.103144 | orchestrator | Saturday 04 April 2026 01:02:43 +0000 (0:00:02.227) 0:01:07.133 ********
2026-04-04 01:05:33.103153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103169 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103188 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103213 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.103240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.103257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.103268 | orchestrator |
2026-04-04 01:05:33.103278 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-04-04 01:05:33.103287 | orchestrator | Saturday 04 April 2026 01:02:47 +0000 (0:00:03.985) 0:01:11.118 ********
2026-04-04 01:05:33.103297 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.103306 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.103315 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.103324 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103338 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103348 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103357 | orchestrator |
2026-04-04 01:05:33.103367 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-04-04 01:05:33.103377 | orchestrator | Saturday 04 April 2026 01:02:49 +0000 (0:00:01.729) 0:01:12.847 ********
2026-04-04 01:05:33.103386 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.103396 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.103405 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103415 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.103424 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103433 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103443 | orchestrator |
2026-04-04 01:05:33.103453 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-04-04 01:05:33.103463 | orchestrator | Saturday 04 April 2026 01:02:51 +0000 (0:00:01.663) 0:01:14.511 ********
2026-04-04 01:05:33.103472 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103481 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.103491 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.103500 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.103509 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103518 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103527 | orchestrator |
2026-04-04 01:05:33.103536 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-04-04 01:05:33.103546 | orchestrator | Saturday 04 April 2026 01:02:53 +0000 (0:00:01.926) 0:01:16.438 ********
2026-04-04 01:05:33.103555 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.103563 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.103572 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.103581 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103590 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103599 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103608 | orchestrator |
2026-04-04 01:05:33.103617 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-04-04 01:05:33.103635 | orchestrator | Saturday 04 April 2026 01:02:55 +0000 (0:00:02.361) 0:01:18.799 ********
2026-04-04 01:05:33.103645 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.103654 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.103663 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.103669 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103675 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103680 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103685 | orchestrator |
2026-04-04 01:05:33.103691 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-04-04 01:05:33.103696 | orchestrator | Saturday 04 April 2026 01:02:57 +0000 (0:00:02.106) 0:01:20.906 ********
2026-04-04 01:05:33.103702 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-04 01:05:33.103708 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.103713 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-04 01:05:33.103718 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.103724 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-04 01:05:33.103734 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103743 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-04 01:05:33.103751 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.103760 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-04 01:05:33.103769 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103785 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-04 01:05:33.103795 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103804 | orchestrator |
2026-04-04 01:05:33.103813 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-04-04 01:05:33.103823 | orchestrator | Saturday 04 April 2026 01:02:59 +0000 (0:00:02.014) 0:01:22.921 ********
2026-04-04 01:05:33.103833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.103844 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.103896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.103909 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.103915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.103921 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.103931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103937 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.103943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103949 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.103958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.103964 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.103969 | orchestrator |
2026-04-04 01:05:33.103975 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-04-04 01:05:33.103984 | orchestrator | Saturday 04 April 2026 01:03:01 +0000 (0:00:02.181) 0:01:25.103 ********
2026-04-04 01:05:33.103990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.103996 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.104008 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:05:33.104023 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.104046 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.104058 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-04 01:05:33.104071 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104080 | orchestrator |
2026-04-04 01:05:33.104094 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-04-04 01:05:33.104104 | orchestrator | Saturday 04 April 2026 01:03:03 +0000 (0:00:01.786) 0:01:26.889 ********
2026-04-04 01:05:33.104112 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104121 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104130 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104138 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104146 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104155 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104163 | orchestrator |
2026-04-04 01:05:33.104171 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-04 01:05:33.104179 | orchestrator | Saturday 04 April 2026 01:03:05 +0000 (0:00:01.880) 0:01:28.769 ********
2026-04-04 01:05:33.104187 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104195 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104203 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104211 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:05:33.104220 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:05:33.104229 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:05:33.104236 | orchestrator |
2026-04-04 01:05:33.104249 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-04 01:05:33.104257 | orchestrator | Saturday 04 April 2026 01:03:09 +0000 (0:00:03.829) 0:01:32.599 ********
2026-04-04 01:05:33.104266 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104274 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104282 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104290 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104298 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104306 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104314 | orchestrator |
2026-04-04 01:05:33.104321 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-04 01:05:33.104329 | orchestrator | Saturday 04 April 2026 01:03:11 +0000 (0:00:01.985) 0:01:34.584 ********
2026-04-04 01:05:33.104345 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104354 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104361 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104369 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104376 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104384 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104392 | orchestrator |
2026-04-04 01:05:33.104400 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-04 01:05:33.104407 | orchestrator | Saturday 04 April 2026 01:03:13 +0000 (0:00:02.633) 0:01:37.217 ********
2026-04-04 01:05:33.104415 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104422 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104430 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104438 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104447 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104455 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104463 | orchestrator |
2026-04-04 01:05:33.104472 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-04 01:05:33.104480 | orchestrator | Saturday 04 April 2026 01:03:15 +0000 (0:00:01.792) 0:01:39.010 ********
2026-04-04 01:05:33.104489 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104497 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104505 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104514 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104523 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104531 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104541 | orchestrator |
2026-04-04 01:05:33.104546 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-04 01:05:33.104556 | orchestrator | Saturday 04 April 2026 01:03:18 +0000 (0:00:02.404) 0:01:41.415 ********
2026-04-04 01:05:33.104561 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104566 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104572 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104577 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104582 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104587 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104592 | orchestrator |
2026-04-04 01:05:33.104597 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-04 01:05:33.104602 | orchestrator | Saturday 04 April 2026 01:03:20 +0000 (0:00:02.158) 0:01:43.573 ********
2026-04-04 01:05:33.104608 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104613 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104618 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104623 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:05:33.104628 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104633 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104638 | orchestrator |
2026-04-04 01:05:33.104643 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-04 01:05:33.104650 | orchestrator | Saturday 04 April 2026 01:03:22 +0000 (0:00:02.114) 0:01:45.688 ********
2026-04-04 01:05:33.104659 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:33.104667 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:33.104676 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:33.104684 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:05:33.104692 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:05:33.104701 | orchestrator |
skipping: [testbed-node-5] 2026-04-04 01:05:33.104709 | orchestrator | 2026-04-04 01:05:33.104719 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-04 01:05:33.104728 | orchestrator | Saturday 04 April 2026 01:03:24 +0000 (0:00:01.966) 0:01:47.654 ******** 2026-04-04 01:05:33.104737 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:05:33.104746 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:33.104765 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:05:33.104776 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.104784 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:05:33.104793 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:33.104802 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:05:33.104811 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:05:33.104819 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:05:33.104826 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:05:33.104831 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:05:33.104836 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:05:33.104842 | orchestrator | 2026-04-04 01:05:33.104847 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-04 01:05:33.104852 | orchestrator | Saturday 04 April 2026 01:03:26 +0000 (0:00:02.638) 0:01:50.293 ******** 2026-04-04 01:05:33.104879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.104887 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:33.104896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-04 01:05:33.104902 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:33.104907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.104918 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.104923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.104929 | 
orchestrator | skipping: [testbed-node-3] 2026-04-04 01:05:33.104938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.104944 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:05:33.104949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.104954 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:05:33.104959 | orchestrator | 2026-04-04 01:05:33.104964 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-04-04 01:05:33.104970 | orchestrator | Saturday 04 
April 2026 01:03:29 +0000 (0:00:02.081) 0:01:52.374 ******** 2026-04-04 01:05:33.104977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.104986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.104995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.105001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:33.105009 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.105015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:05:33.105023 | orchestrator | 2026-04-04 01:05:33.105029 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-04-04 01:05:33.105034 | orchestrator | Saturday 04 April 2026 01:03:31 +0000 (0:00:02.760) 0:01:55.134 ******** 2026-04-04 01:05:33.105039 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:05:33.105045 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:33.105050 
| orchestrator | } 2026-04-04 01:05:33.105055 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:05:33.105060 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:33.105065 | orchestrator | } 2026-04-04 01:05:33.105071 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:05:33.105083 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:33.105094 | orchestrator | } 2026-04-04 01:05:33.105103 | orchestrator | changed: [testbed-node-3] => { 2026-04-04 01:05:33.105111 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:33.105119 | orchestrator | } 2026-04-04 01:05:33.105128 | orchestrator | changed: [testbed-node-4] => { 2026-04-04 01:05:33.105136 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:33.105145 | orchestrator | } 2026-04-04 01:05:33.105154 | orchestrator | changed: [testbed-node-5] => { 2026-04-04 01:05:33.105162 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:33.105171 | orchestrator | } 2026-04-04 01:05:33.105179 | orchestrator | 2026-04-04 01:05:33.105189 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:05:33.105197 | orchestrator | Saturday 04 April 2026 01:03:32 +0000 (0:00:00.532) 0:01:55.667 ******** 2026-04-04 01:05:33.105213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.105222 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:33.105232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.105248 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.105262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:33.105272 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:33.105280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.105290 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:05:33.105296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.105301 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:05:33.105311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:05:33.105317 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:05:33.105322 | orchestrator | 2026-04-04 01:05:33.105327 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-04 01:05:33.105332 | orchestrator | Saturday 04 April 2026 01:03:35 +0000 (0:00:03.148) 0:01:58.815 ******** 2026-04-04 01:05:33.105341 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:33.105346 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:33.105351 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:33.105356 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:05:33.105361 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:05:33.105366 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:05:33.105371 | orchestrator | 2026-04-04 01:05:33.105376 | 
orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-04 01:05:33.105381 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:00.600) 0:01:59.416 ******** 2026-04-04 01:05:33.105386 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:33.105391 | orchestrator | 2026-04-04 01:05:33.105396 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-04 01:05:33.105401 | orchestrator | Saturday 04 April 2026 01:03:38 +0000 (0:00:02.052) 0:02:01.468 ******** 2026-04-04 01:05:33.105407 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:33.105416 | orchestrator | 2026-04-04 01:05:33.105425 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-04 01:05:33.105437 | orchestrator | Saturday 04 April 2026 01:03:40 +0000 (0:00:02.223) 0:02:03.692 ******** 2026-04-04 01:05:33.105446 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:33.105455 | orchestrator | 2026-04-04 01:05:33.105464 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:05:33.105473 | orchestrator | Saturday 04 April 2026 01:04:20 +0000 (0:00:40.206) 0:02:43.898 ******** 2026-04-04 01:05:33.105481 | orchestrator | 2026-04-04 01:05:33.105490 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:05:33.105498 | orchestrator | Saturday 04 April 2026 01:04:20 +0000 (0:00:00.119) 0:02:44.018 ******** 2026-04-04 01:05:33.105506 | orchestrator | 2026-04-04 01:05:33.105514 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:05:33.105523 | orchestrator | Saturday 04 April 2026 01:04:20 +0000 (0:00:00.097) 0:02:44.116 ******** 2026-04-04 01:05:33.105532 | orchestrator | 2026-04-04 01:05:33.105541 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2026-04-04 01:05:33.105549 | orchestrator | Saturday 04 April 2026 01:04:20 +0000 (0:00:00.128) 0:02:44.244 ******** 2026-04-04 01:05:33.105557 | orchestrator | 2026-04-04 01:05:33.105566 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:05:33.105575 | orchestrator | Saturday 04 April 2026 01:04:21 +0000 (0:00:00.146) 0:02:44.391 ******** 2026-04-04 01:05:33.105584 | orchestrator | 2026-04-04 01:05:33.105593 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:05:33.105601 | orchestrator | Saturday 04 April 2026 01:04:21 +0000 (0:00:00.139) 0:02:44.530 ******** 2026-04-04 01:05:33.105610 | orchestrator | 2026-04-04 01:05:33.105619 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-04 01:05:33.105628 | orchestrator | Saturday 04 April 2026 01:04:21 +0000 (0:00:00.154) 0:02:44.685 ******** 2026-04-04 01:05:33.105783 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:33.105796 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:05:33.105802 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:05:33.105807 | orchestrator | 2026-04-04 01:05:33.105813 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-04 01:05:33.105818 | orchestrator | Saturday 04 April 2026 01:04:43 +0000 (0:00:22.051) 0:03:06.736 ******** 2026-04-04 01:05:33.105823 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:05:33.105828 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:05:33.105834 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:05:33.105839 | orchestrator | 2026-04-04 01:05:33.105844 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:05:33.105850 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 
failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:05:33.105881 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-04 01:05:33.105887 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-04 01:05:33.105893 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:05:33.105903 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:05:33.105909 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:05:33.105914 | orchestrator | 2026-04-04 01:05:33.105919 | orchestrator | 2026-04-04 01:05:33.105924 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:05:33.105929 | orchestrator | Saturday 04 April 2026 01:05:29 +0000 (0:00:46.551) 0:03:53.288 ******** 2026-04-04 01:05:33.105934 | orchestrator | =============================================================================== 2026-04-04 01:05:33.105943 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 46.55s 2026-04-04 01:05:33.105955 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.21s 2026-04-04 01:05:33.105965 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.05s 2026-04-04 01:05:33.105973 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 7.59s 2026-04-04 01:05:33.105981 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 7.16s 2026-04-04 01:05:33.105990 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.64s 2026-04-04 01:05:33.105998 | orchestrator | service-ks-register : neutron | Creating users 
-------------------------- 4.08s 2026-04-04 01:05:33.106007 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.99s 2026-04-04 01:05:33.106042 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.83s 2026-04-04 01:05:33.106048 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 3.54s 2026-04-04 01:05:33.106053 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.32s 2026-04-04 01:05:33.106058 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.15s 2026-04-04 01:05:33.106063 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 2.91s 2026-04-04 01:05:33.106068 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.89s 2026-04-04 01:05:33.106077 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.79s 2026-04-04 01:05:33.106083 | orchestrator | service-check-containers : neutron | Check containers ------------------- 2.76s 2026-04-04 01:05:33.106088 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 2.64s 2026-04-04 01:05:33.106093 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 2.63s 2026-04-04 01:05:33.106098 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 2.41s 2026-04-04 01:05:33.106103 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 2.36s 2026-04-04 01:05:33.106109 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:05:33.106114 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:33.106119 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task 
2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:33.106124 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:33.106135 | orchestrator | 2026-04-04 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:36.135083 | orchestrator | 2026-04-04 01:05:36 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:05:36.137192 | orchestrator | 2026-04-04 01:05:36 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:36.139310 | orchestrator | 2026-04-04 01:05:36 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:36.141341 | orchestrator | 2026-04-04 01:05:36 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:36.141567 | orchestrator | 2026-04-04 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:39.170660 | orchestrator | 2026-04-04 01:05:39 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:05:39.173368 | orchestrator | 2026-04-04 01:05:39 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:39.174357 | orchestrator | 2026-04-04 01:05:39 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:39.178006 | orchestrator | 2026-04-04 01:05:39 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state STARTED 2026-04-04 01:05:39.178078 | orchestrator | 2026-04-04 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:42.225365 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:05:42.225556 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:05:42.226319 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task 
34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:42.226987 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state STARTED 2026-04-04 01:05:42.227461 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task 2748df1f-0d5b-4221-8f88-f83480ea0759 is in state SUCCESS 2026-04-04 01:05:42.227483 | orchestrator | 2026-04-04 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:45.261235 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:05:45.267386 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:05:45.268431 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:05:45.269901 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:45.272734 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task 2b34a236-acba-4224-9d41-d6d0e5d1b906 is in state SUCCESS 2026-04-04 01:05:45.274212 | orchestrator | 2026-04-04 01:05:45.274270 | orchestrator | 2026-04-04 01:05:45.274280 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-04 01:05:45.274288 | orchestrator | 2026-04-04 01:05:45.274295 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-04 01:05:45.274303 | orchestrator | Saturday 04 April 2026 01:04:14 +0000 (0:00:00.088) 0:00:00.088 ******** 2026-04-04 01:05:45.274310 | orchestrator | changed: [localhost] 2026-04-04 01:05:45.274317 | orchestrator | 2026-04-04 01:05:45.274324 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-04 01:05:45.274331 | orchestrator | Saturday 04 April 2026 01:04:15 +0000 (0:00:00.775) 0:00:00.864 ******** 2026-04-04 01:05:45.274337 | 
orchestrator | changed: [localhost] 2026-04-04 01:05:45.274344 | orchestrator | 2026-04-04 01:05:45.274350 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-04-04 01:05:45.274371 | orchestrator | Saturday 04 April 2026 01:04:50 +0000 (0:00:35.466) 0:00:36.331 ******** 2026-04-04 01:05:45.274386 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-04-04 01:05:45.274394 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2026-04-04 01:05:45.274401 | orchestrator | changed: [localhost] 2026-04-04 01:05:45.274407 | orchestrator | 2026-04-04 01:05:45.274414 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:05:45.274421 | orchestrator | 2026-04-04 01:05:45.274428 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:05:45.274435 | orchestrator | Saturday 04 April 2026 01:05:38 +0000 (0:00:48.405) 0:01:24.736 ******** 2026-04-04 01:05:45.274442 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:05:45.274449 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:05:45.274455 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:05:45.274461 | orchestrator | 2026-04-04 01:05:45.274467 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:05:45.274473 | orchestrator | Saturday 04 April 2026 01:05:39 +0000 (0:00:00.247) 0:01:24.984 ******** 2026-04-04 01:05:45.274480 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-04-04 01:05:45.274486 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-04-04 01:05:45.274493 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-04-04 01:05:45.274500 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-04-04 
01:05:45.274507 | orchestrator | 2026-04-04 01:05:45.274513 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-04-04 01:05:45.274519 | orchestrator | skipping: no hosts matched 2026-04-04 01:05:45.274526 | orchestrator | 2026-04-04 01:05:45.274532 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:05:45.274538 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:05:45.274593 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:05:45.274600 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:05:45.274606 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:05:45.274613 | orchestrator | 2026-04-04 01:05:45.274619 | orchestrator | 2026-04-04 01:05:45.274625 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:05:45.274631 | orchestrator | Saturday 04 April 2026 01:05:39 +0000 (0:00:00.353) 0:01:25.337 ******** 2026-04-04 01:05:45.274637 | orchestrator | =============================================================================== 2026-04-04 01:05:45.274644 | orchestrator | Download ironic-agent kernel ------------------------------------------- 48.41s 2026-04-04 01:05:45.275006 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 35.47s 2026-04-04 01:05:45.275021 | orchestrator | Ensure the destination directory exists --------------------------------- 0.78s 2026-04-04 01:05:45.275027 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2026-04-04 01:05:45.275033 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2026-04-04 
01:05:45.275040 | orchestrator | 2026-04-04 01:05:45.275046 | orchestrator | 2026-04-04 01:05:45.275053 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:05:45.275059 | orchestrator | 2026-04-04 01:05:45.275065 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:05:45.275072 | orchestrator | Saturday 04 April 2026 01:03:08 +0000 (0:00:00.393) 0:00:00.393 ******** 2026-04-04 01:05:45.275087 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:05:45.275094 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:05:45.275101 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:05:45.275107 | orchestrator | 2026-04-04 01:05:45.275114 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:05:45.275148 | orchestrator | Saturday 04 April 2026 01:03:08 +0000 (0:00:00.383) 0:00:00.777 ******** 2026-04-04 01:05:45.275183 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-04 01:05:45.275191 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-04 01:05:45.275197 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-04 01:05:45.275204 | orchestrator | 2026-04-04 01:05:45.275210 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-04 01:05:45.275216 | orchestrator | 2026-04-04 01:05:45.275223 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-04 01:05:45.275230 | orchestrator | Saturday 04 April 2026 01:03:08 +0000 (0:00:00.281) 0:00:01.058 ******** 2026-04-04 01:05:45.275246 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:05:45.275254 | orchestrator | 2026-04-04 01:05:45.275261 | orchestrator | TASK [service-ks-register : designate | 
Creating/deleting services] ************ 2026-04-04 01:05:45.275267 | orchestrator | Saturday 04 April 2026 01:03:09 +0000 (0:00:00.546) 0:00:01.605 ******** 2026-04-04 01:05:45.275274 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-04-04 01:05:45.275280 | orchestrator | 2026-04-04 01:05:45.275287 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-04-04 01:05:45.275293 | orchestrator | Saturday 04 April 2026 01:03:12 +0000 (0:00:03.397) 0:00:05.002 ******** 2026-04-04 01:05:45.275300 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-04-04 01:05:45.275306 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-04-04 01:05:45.275313 | orchestrator | 2026-04-04 01:05:45.275531 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-04-04 01:05:45.275546 | orchestrator | Saturday 04 April 2026 01:03:18 +0000 (0:00:06.196) 0:00:11.199 ******** 2026-04-04 01:05:45.275553 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:05:45.275560 | orchestrator | 2026-04-04 01:05:45.275566 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-04-04 01:05:45.275572 | orchestrator | Saturday 04 April 2026 01:03:21 +0000 (0:00:03.093) 0:00:14.292 ******** 2026-04-04 01:05:45.275578 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-04-04 01:05:45.275583 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:05:45.275590 | orchestrator | 2026-04-04 01:05:45.275597 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-04 01:05:45.275603 | orchestrator | Saturday 04 April 2026 01:03:25 +0000 (0:00:03.212) 0:00:17.504 ******** 2026-04-04 01:05:45.275609 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:05:45.275615 | orchestrator | 2026-04-04 01:05:45.275621 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-04-04 01:05:45.275627 | orchestrator | Saturday 04 April 2026 01:03:28 +0000 (0:00:03.207) 0:00:20.712 ******** 2026-04-04 01:05:45.275633 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-04 01:05:45.275640 | orchestrator | 2026-04-04 01:05:45.275646 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-04 01:05:45.275653 | orchestrator | Saturday 04 April 2026 01:03:32 +0000 (0:00:03.676) 0:00:24.389 ******** 2026-04-04 01:05:45.275663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.275680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.275728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.275755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.275885 | orchestrator | 2026-04-04 01:05:45.275891 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-04 01:05:45.275897 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:04.278) 0:00:28.668 ******** 2026-04-04 01:05:45.275903 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 01:05:45.275908 | orchestrator | 2026-04-04 01:05:45.275914 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-04 01:05:45.275921 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:00.124) 0:00:28.793 ******** 2026-04-04 01:05:45.275927 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:45.275934 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:45.275939 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:45.275942 | orchestrator | 2026-04-04 01:05:45.275946 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-04 01:05:45.275950 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:00.251) 0:00:29.044 ******** 2026-04-04 01:05:45.275954 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:05:45.275959 | orchestrator | 2026-04-04 01:05:45.275963 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-04 01:05:45.275967 | orchestrator | Saturday 04 April 2026 01:03:37 +0000 (0:00:00.475) 0:00:29.520 ******** 2026-04-04 01:05:45.275971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276354 | orchestrator | 2026-04-04 01:05:45.276361 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-04 01:05:45.276366 | orchestrator | Saturday 04 April 2026 01:03:43 +0000 (0:00:05.951) 0:00:35.471 ******** 2026-04-04 01:05:45.276371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.276388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.276398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.276402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.276406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.276410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.276438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-04-04 01:05:45.276484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276489 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:45.276493 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:45.276497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276501 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:45.276505 | orchestrator | 2026-04-04 01:05:45.276509 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-04 01:05:45.276513 | orchestrator | Saturday 04 April 2026 01:03:44 +0000 (0:00:01.080) 0:00:36.551 ******** 2026-04-04 01:05:45.276517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.276537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.276541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.276546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.276550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.276554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.276580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276625 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:45.276629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276633 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:45.276637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.276645 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:45.276649 | orchestrator | 2026-04-04 01:05:45.276653 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-04 01:05:45.276657 | orchestrator | Saturday 04 April 2026 01:03:45 +0000 (0:00:01.338) 0:00:37.889 ******** 2026-04-04 01:05:45.276661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:45.276804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276823 | orchestrator | 2026-04-04 01:05:45.276828 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-04 01:05:45.276835 | orchestrator | Saturday 04 April 2026 01:03:52 +0000 (0:00:06.849) 0:00:44.739 ******** 2026-04-04 01:05:45.276878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.276904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.276990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}}) 2026-04-04 01:05:45.276997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277012 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277025 | orchestrator | 2026-04-04 01:05:45.277029 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-04 01:05:45.277034 | orchestrator | Saturday 04 April 2026 01:04:08 +0000 (0:00:16.199) 0:01:00.939 ******** 2026-04-04 01:05:45.277039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-04 01:05:45.277044 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-04 01:05:45.277048 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-04 01:05:45.277053 | orchestrator | 2026-04-04 01:05:45.277057 | orchestrator | TASK 
[designate : Copying over named.conf] ************************************* 2026-04-04 01:05:45.277062 | orchestrator | Saturday 04 April 2026 01:04:12 +0000 (0:00:03.865) 0:01:04.804 ******** 2026-04-04 01:05:45.277066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-04 01:05:45.277071 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-04 01:05:45.277075 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-04 01:05:45.277080 | orchestrator | 2026-04-04 01:05:45.277094 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-04 01:05:45.277099 | orchestrator | Saturday 04 April 2026 01:04:14 +0000 (0:00:02.224) 0:01:07.029 ******** 2026-04-04 01:05:45.277106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277229 | orchestrator | 2026-04-04 01:05:45.277234 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-04 01:05:45.277239 | orchestrator | Saturday 04 April 2026 01:04:17 +0000 (0:00:02.349) 0:01:09.379 
******** 2026-04-04 01:05:45.277246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-04 01:05:45.277259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277356 | orchestrator | 2026-04-04 01:05:45.277364 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-04 01:05:45.277370 | orchestrator | Saturday 04 April 2026 01:04:19 +0000 (0:00:02.170) 0:01:11.549 ******** 2026-04-04 01:05:45.277380 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:45.277387 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:45.277392 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:45.277398 | orchestrator | 2026-04-04 01:05:45.277404 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-04 01:05:45.277409 | orchestrator | Saturday 04 April 2026 01:04:19 +0000 (0:00:00.251) 0:01:11.801 ******** 2026-04-04 01:05:45.277419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.277432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-04-04 01:05:45.277465 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:45.277476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.277490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277537 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:45.277545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.277553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277590 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:45.277597 | orchestrator | 2026-04-04 01:05:45.277606 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-04-04 01:05:45.277610 | orchestrator | Saturday 04 April 2026 01:04:20 +0000 (0:00:00.966) 0:01:12.767 ******** 2026-04-04 01:05:45.277614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.277618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.277622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:05:45.277635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2026-04-04 01:05:45.277698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:45.277734 | orchestrator | 2026-04-04 
01:05:45.277741 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-04-04 01:05:45.277747 | orchestrator | Saturday 04 April 2026 01:04:25 +0000 (0:00:05.078) 0:01:17.845 ******** 2026-04-04 01:05:45.277754 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:05:45.277758 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:45.277762 | orchestrator | } 2026-04-04 01:05:45.277766 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:05:45.277770 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:45.277774 | orchestrator | } 2026-04-04 01:05:45.277778 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:05:45.277782 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:05:45.277786 | orchestrator | } 2026-04-04 01:05:45.277790 | orchestrator | 2026-04-04 01:05:45.277794 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:05:45.277798 | orchestrator | Saturday 04 April 2026 01:04:25 +0000 (0:00:00.470) 0:01:18.316 ******** 2026-04-04 01:05:45.277802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.277814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277924 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:45.277961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.277966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.277975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.277997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.278001 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:45.278005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:05:45.278045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:05:45.278052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.278059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.278066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.278070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:05:45.278074 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:45.278081 | orchestrator | 2026-04-04 01:05:45.278088 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-04 01:05:45.278095 | orchestrator | Saturday 04 April 2026 01:04:26 +0000 (0:00:00.902) 0:01:19.218 ******** 2026-04-04 01:05:45.278102 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:45.278109 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:45.278113 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:45.278117 | orchestrator | 2026-04-04 01:05:45.278121 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-04 01:05:45.278131 | orchestrator | Saturday 04 April 2026 01:04:27 +0000 (0:00:00.203) 0:01:19.422 ******** 2026-04-04 01:05:45.278135 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-04 01:05:45.278139 | orchestrator | 2026-04-04 01:05:45.278143 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-04 01:05:45.278147 | orchestrator | Saturday 04 April 2026 01:04:29 +0000 (0:00:02.182) 0:01:21.605 ******** 2026-04-04 01:05:45.278151 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 01:05:45.278155 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-04 01:05:45.278159 | orchestrator | 2026-04-04 01:05:45.278162 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-04 01:05:45.278166 | orchestrator | Saturday 04 April 2026 01:04:31 +0000 (0:00:02.487) 0:01:24.092 ******** 2026-04-04 01:05:45.278170 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278174 | orchestrator | 2026-04-04 01:05:45.278178 | orchestrator | 
TASK [designate : Flush handlers] ********************************************** 2026-04-04 01:05:45.278182 | orchestrator | Saturday 04 April 2026 01:04:44 +0000 (0:00:13.206) 0:01:37.298 ******** 2026-04-04 01:05:45.278186 | orchestrator | 2026-04-04 01:05:45.278190 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-04 01:05:45.278193 | orchestrator | Saturday 04 April 2026 01:04:45 +0000 (0:00:00.089) 0:01:37.387 ******** 2026-04-04 01:05:45.278197 | orchestrator | 2026-04-04 01:05:45.278201 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-04 01:05:45.278205 | orchestrator | Saturday 04 April 2026 01:04:45 +0000 (0:00:00.142) 0:01:37.532 ******** 2026-04-04 01:05:45.278209 | orchestrator | 2026-04-04 01:05:45.278213 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-04 01:05:45.278216 | orchestrator | Saturday 04 April 2026 01:04:45 +0000 (0:00:00.110) 0:01:37.642 ******** 2026-04-04 01:05:45.278220 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278224 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:05:45.278228 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:05:45.278232 | orchestrator | 2026-04-04 01:05:45.278235 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-04 01:05:45.278239 | orchestrator | Saturday 04 April 2026 01:04:59 +0000 (0:00:13.925) 0:01:51.567 ******** 2026-04-04 01:05:45.278243 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:05:45.278247 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:05:45.278250 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278254 | orchestrator | 2026-04-04 01:05:45.278258 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-04 01:05:45.278262 | orchestrator | Saturday 04 April 2026 
01:05:07 +0000 (0:00:08.680) 0:02:00.248 ******** 2026-04-04 01:05:45.278265 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278269 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:05:45.278273 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:05:45.278277 | orchestrator | 2026-04-04 01:05:45.278281 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-04 01:05:45.278284 | orchestrator | Saturday 04 April 2026 01:05:13 +0000 (0:00:05.291) 0:02:05.540 ******** 2026-04-04 01:05:45.278288 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278292 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:05:45.278296 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:05:45.278299 | orchestrator | 2026-04-04 01:05:45.278303 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-04 01:05:45.278307 | orchestrator | Saturday 04 April 2026 01:05:18 +0000 (0:00:05.077) 0:02:10.617 ******** 2026-04-04 01:05:45.278314 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:05:45.278318 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:05:45.278322 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278326 | orchestrator | 2026-04-04 01:05:45.278330 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-04 01:05:45.278334 | orchestrator | Saturday 04 April 2026 01:05:26 +0000 (0:00:08.266) 0:02:18.884 ******** 2026-04-04 01:05:45.278341 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:05:45.278345 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:05:45.278349 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278353 | orchestrator | 2026-04-04 01:05:45.278357 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-04 01:05:45.278361 | orchestrator | Saturday 04 April 2026 01:05:34 +0000 
(0:00:08.439) 0:02:27.324 ******** 2026-04-04 01:05:45.278365 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:45.278369 | orchestrator | 2026-04-04 01:05:45.278373 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:05:45.278379 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-04 01:05:45.278384 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:05:45.278388 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:05:45.278392 | orchestrator | 2026-04-04 01:05:45.278396 | orchestrator | 2026-04-04 01:05:45.278399 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:05:45.278403 | orchestrator | Saturday 04 April 2026 01:05:42 +0000 (0:00:07.310) 0:02:34.634 ******** 2026-04-04 01:05:45.278407 | orchestrator | =============================================================================== 2026-04-04 01:05:45.278411 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.20s 2026-04-04 01:05:45.278415 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.93s 2026-04-04 01:05:45.278418 | orchestrator | designate : Running Designate bootstrap container ---------------------- 13.21s 2026-04-04 01:05:45.278422 | orchestrator | designate : Restart designate-api container ----------------------------- 8.68s 2026-04-04 01:05:45.278426 | orchestrator | designate : Restart designate-worker container -------------------------- 8.44s 2026-04-04 01:05:45.278430 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.27s 2026-04-04 01:05:45.278433 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.31s 
2026-04-04 01:05:45.278437 | orchestrator | designate : Copying over config.json files for services ----------------- 6.85s 2026-04-04 01:05:45.278441 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 6.20s 2026-04-04 01:05:45.278445 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.95s 2026-04-04 01:05:45.278448 | orchestrator | designate : Restart designate-central container ------------------------- 5.29s 2026-04-04 01:05:45.278452 | orchestrator | service-check-containers : designate | Check containers ----------------- 5.08s 2026-04-04 01:05:45.278456 | orchestrator | designate : Restart designate-producer container ------------------------ 5.08s 2026-04-04 01:05:45.278460 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.28s 2026-04-04 01:05:45.278464 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.87s 2026-04-04 01:05:45.278468 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 3.68s 2026-04-04 01:05:45.278471 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 3.40s 2026-04-04 01:05:45.278475 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.21s 2026-04-04 01:05:45.278479 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.21s 2026-04-04 01:05:45.278483 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.09s 2026-04-04 01:05:48.307542 | orchestrator | 2026-04-04 01:05:48 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:05:48.307824 | orchestrator | 2026-04-04 01:05:48 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:05:48.308487 | orchestrator | 2026-04-04 01:05:48 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a 
is in state STARTED 2026-04-04 01:05:48.309054 | orchestrator | 2026-04-04 01:05:48 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:48.309084 | orchestrator | 2026-04-04 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:51.361487 | orchestrator | 2026-04-04 01:05:51 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:05:51.361662 | orchestrator | 2026-04-04 01:05:51 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:05:51.362976 | orchestrator | 2026-04-04 01:05:51 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:05:51.364135 | orchestrator | 2026-04-04 01:05:51 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:51.364177 | orchestrator | 2026-04-04 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:54.400682 | orchestrator | 2026-04-04 01:05:54 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:05:54.402478 | orchestrator | 2026-04-04 01:05:54 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:05:54.403988 | orchestrator | 2026-04-04 01:05:54 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:05:54.404853 | orchestrator | 2026-04-04 01:05:54 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:54.404892 | orchestrator | 2026-04-04 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:57.438745 | orchestrator | 2026-04-04 01:05:57 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:05:57.439618 | orchestrator | 2026-04-04 01:05:57 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:05:57.442209 | orchestrator | 2026-04-04 01:05:57 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 
01:05:57.443836 | orchestrator | 2026-04-04 01:05:57 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:05:57.443871 | orchestrator | 2026-04-04 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:00.475599 | orchestrator | 2026-04-04 01:06:00 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:00.482037 | orchestrator | 2026-04-04 01:06:00 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:00.482654 | orchestrator | 2026-04-04 01:06:00 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:00.483704 | orchestrator | 2026-04-04 01:06:00 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:00.483739 | orchestrator | 2026-04-04 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:03.518258 | orchestrator | 2026-04-04 01:06:03 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:03.519902 | orchestrator | 2026-04-04 01:06:03 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:03.521460 | orchestrator | 2026-04-04 01:06:03 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:03.522733 | orchestrator | 2026-04-04 01:06:03 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:03.522778 | orchestrator | 2026-04-04 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:06.571731 | orchestrator | 2026-04-04 01:06:06 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:06.571964 | orchestrator | 2026-04-04 01:06:06 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:06.572668 | orchestrator | 2026-04-04 01:06:06 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:06.573483 | orchestrator 
| 2026-04-04 01:06:06 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:06.573519 | orchestrator | 2026-04-04 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:09.607631 | orchestrator | 2026-04-04 01:06:09 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:09.609011 | orchestrator | 2026-04-04 01:06:09 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:09.609654 | orchestrator | 2026-04-04 01:06:09 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:09.610311 | orchestrator | 2026-04-04 01:06:09 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:09.610421 | orchestrator | 2026-04-04 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:12.636835 | orchestrator | 2026-04-04 01:06:12 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:12.639177 | orchestrator | 2026-04-04 01:06:12 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:12.639967 | orchestrator | 2026-04-04 01:06:12 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:12.640832 | orchestrator | 2026-04-04 01:06:12 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:12.640951 | orchestrator | 2026-04-04 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:15.677697 | orchestrator | 2026-04-04 01:06:15 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:15.680284 | orchestrator | 2026-04-04 01:06:15 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:15.683073 | orchestrator | 2026-04-04 01:06:15 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:15.684498 | orchestrator | 2026-04-04 01:06:15 | INFO  | 
Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:15.684936 | orchestrator | 2026-04-04 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:18.718535 | orchestrator | 2026-04-04 01:06:18 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:18.718872 | orchestrator | 2026-04-04 01:06:18 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:18.719893 | orchestrator | 2026-04-04 01:06:18 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:18.720745 | orchestrator | 2026-04-04 01:06:18 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:18.721210 | orchestrator | 2026-04-04 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:21.752699 | orchestrator | 2026-04-04 01:06:21 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:21.752847 | orchestrator | 2026-04-04 01:06:21 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:21.753544 | orchestrator | 2026-04-04 01:06:21 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:21.755539 | orchestrator | 2026-04-04 01:06:21 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:21.755634 | orchestrator | 2026-04-04 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:24.802626 | orchestrator | 2026-04-04 01:06:24 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:24.803186 | orchestrator | 2026-04-04 01:06:24 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:24.803723 | orchestrator | 2026-04-04 01:06:24 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:24.805799 | orchestrator | 2026-04-04 01:06:24 | INFO  | Task 
34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:24.805832 | orchestrator | 2026-04-04 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:27.842003 | orchestrator | 2026-04-04 01:06:27 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:27.842481 | orchestrator | 2026-04-04 01:06:27 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:27.844686 | orchestrator | 2026-04-04 01:06:27 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:27.845973 | orchestrator | 2026-04-04 01:06:27 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:27.846004 | orchestrator | 2026-04-04 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:30.880861 | orchestrator | 2026-04-04 01:06:30 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:30.881297 | orchestrator | 2026-04-04 01:06:30 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:30.883132 | orchestrator | 2026-04-04 01:06:30 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:30.883978 | orchestrator | 2026-04-04 01:06:30 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:30.884053 | orchestrator | 2026-04-04 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:33.911863 | orchestrator | 2026-04-04 01:06:33 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:33.912112 | orchestrator | 2026-04-04 01:06:33 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:33.914208 | orchestrator | 2026-04-04 01:06:33 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:33.914864 | orchestrator | 2026-04-04 01:06:33 | INFO  | Task 
34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:33.914925 | orchestrator | 2026-04-04 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:36.945848 | orchestrator | 2026-04-04 01:06:36 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:36.951294 | orchestrator | 2026-04-04 01:06:36 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:36.951824 | orchestrator | 2026-04-04 01:06:36 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:36.952516 | orchestrator | 2026-04-04 01:06:36 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:36.952528 | orchestrator | 2026-04-04 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:39.977137 | orchestrator | 2026-04-04 01:06:39 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:39.977241 | orchestrator | 2026-04-04 01:06:39 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:39.977819 | orchestrator | 2026-04-04 01:06:39 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:39.978494 | orchestrator | 2026-04-04 01:06:39 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:39.978523 | orchestrator | 2026-04-04 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:43.003222 | orchestrator | 2026-04-04 01:06:43 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:43.004377 | orchestrator | 2026-04-04 01:06:43 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:43.005742 | orchestrator | 2026-04-04 01:06:43 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:43.007919 | orchestrator | 2026-04-04 01:06:43 | INFO  | Task 
34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:43.007966 | orchestrator | 2026-04-04 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:46.030294 | orchestrator | 2026-04-04 01:06:46 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:46.032123 | orchestrator | 2026-04-04 01:06:46 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:46.033919 | orchestrator | 2026-04-04 01:06:46 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state STARTED 2026-04-04 01:06:46.036024 | orchestrator | 2026-04-04 01:06:46 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:46.036080 | orchestrator | 2026-04-04 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:49.063666 | orchestrator | 2026-04-04 01:06:49 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED 2026-04-04 01:06:49.065321 | orchestrator | 2026-04-04 01:06:49 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:06:49.066250 | orchestrator | 2026-04-04 01:06:49 | INFO  | Task 46345d6d-fe69-48a7-a732-20e39e540d3a is in state SUCCESS 2026-04-04 01:06:49.068624 | orchestrator | 2026-04-04 01:06:49 | INFO  | Task 405a6e7c-2ebb-4e73-8df8-1ff3d67275da is in state STARTED 2026-04-04 01:06:49.068665 | orchestrator | 2026-04-04 01:06:49 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:06:49.068674 | orchestrator | 2026-04-04 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:49.070085 | orchestrator | 2026-04-04 01:06:49.070115 | orchestrator | 2026-04-04 01:06:49.070120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:06:49.070125 | orchestrator | 2026-04-04 01:06:49.070129 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2026-04-04 01:06:49.070133 | orchestrator | Saturday 04 April 2026 01:05:35 +0000 (0:00:00.788) 0:00:00.788 ******** 2026-04-04 01:06:49.070138 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:06:49.070142 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:06:49.070146 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:06:49.070150 | orchestrator | 2026-04-04 01:06:49.070154 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:06:49.070158 | orchestrator | Saturday 04 April 2026 01:05:35 +0000 (0:00:00.285) 0:00:01.073 ******** 2026-04-04 01:06:49.070162 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-04 01:06:49.070166 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-04 01:06:49.070170 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-04 01:06:49.070174 | orchestrator | 2026-04-04 01:06:49.070178 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-04 01:06:49.070182 | orchestrator | 2026-04-04 01:06:49.070198 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-04 01:06:49.070202 | orchestrator | Saturday 04 April 2026 01:05:35 +0000 (0:00:00.278) 0:00:01.352 ******** 2026-04-04 01:06:49.070206 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:06:49.070210 | orchestrator | 2026-04-04 01:06:49.070214 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-04-04 01:06:49.070218 | orchestrator | Saturday 04 April 2026 01:05:36 +0000 (0:00:00.449) 0:00:01.802 ******** 2026-04-04 01:06:49.070221 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-04 01:06:49.070225 | orchestrator | 2026-04-04 01:06:49.070229 | orchestrator | TASK [service-ks-register : 
placement | Creating/deleting endpoints] *********** 2026-04-04 01:06:49.070233 | orchestrator | Saturday 04 April 2026 01:05:39 +0000 (0:00:03.478) 0:00:05.280 ******** 2026-04-04 01:06:49.070237 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-04 01:06:49.070241 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-04 01:06:49.070245 | orchestrator | 2026-04-04 01:06:49.070248 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-04 01:06:49.070252 | orchestrator | Saturday 04 April 2026 01:05:46 +0000 (0:00:06.757) 0:00:12.037 ******** 2026-04-04 01:06:49.070263 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:06:49.070267 | orchestrator | 2026-04-04 01:06:49.070271 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-04 01:06:49.070275 | orchestrator | Saturday 04 April 2026 01:05:49 +0000 (0:00:03.536) 0:00:15.574 ******** 2026-04-04 01:06:49.070279 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-04 01:06:49.070282 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:06:49.070286 | orchestrator | 2026-04-04 01:06:49.070290 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-04 01:06:49.070294 | orchestrator | Saturday 04 April 2026 01:05:54 +0000 (0:00:04.887) 0:00:20.462 ******** 2026-04-04 01:06:49.070298 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:06:49.070302 | orchestrator | 2026-04-04 01:06:49.070305 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-04-04 01:06:49.070309 | orchestrator | Saturday 04 April 2026 01:05:58 +0000 (0:00:03.587) 0:00:24.049 ******** 2026-04-04 01:06:49.070313 | 
orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-04 01:06:49.070317 | orchestrator | 2026-04-04 01:06:49.070321 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-04 01:06:49.070325 | orchestrator | Saturday 04 April 2026 01:06:01 +0000 (0:00:03.512) 0:00:27.562 ******** 2026-04-04 01:06:49.070328 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:49.070332 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:49.070336 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:49.070340 | orchestrator | 2026-04-04 01:06:49.070344 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-04 01:06:49.070348 | orchestrator | Saturday 04 April 2026 01:06:02 +0000 (0:00:00.320) 0:00:27.882 ******** 2026-04-04 01:06:49.070360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 01:06:49.070369 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 01:06:49.070377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 01:06:49.070381 | orchestrator | 2026-04-04 01:06:49.070385 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-04 01:06:49.070389 | orchestrator | Saturday 04 April 2026 01:06:03 +0000 (0:00:01.692) 0:00:29.575 ******** 2026-04-04 01:06:49.070393 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:49.070396 | orchestrator | 2026-04-04 01:06:49.070400 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-04 01:06:49.070404 | orchestrator | Saturday 04 April 2026 01:06:03 +0000 (0:00:00.121) 0:00:29.696 ******** 2026-04-04 01:06:49.070408 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:49.070412 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:49.070416 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:49.070419 | orchestrator | 2026-04-04 01:06:49.070423 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-04 01:06:49.070427 | orchestrator | Saturday 04 April 2026 01:06:04 +0000 (0:00:00.271) 0:00:29.967 ******** 2026-04-04 01:06:49.070431 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:06:49.070435 | orchestrator | 2026-04-04 01:06:49.070439 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-04 01:06:49.070442 | orchestrator | Saturday 04 April 2026 01:06:04 +0000 (0:00:00.612) 0:00:30.580 ******** 2026-04-04 01:06:49.070446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 01:06:49.070458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 01:06:49.070465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-04 01:06:49.070469 | orchestrator | 2026-04-04 01:06:49.070473 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-04 01:06:49.070477 | orchestrator | Saturday 04 April 2026 01:06:06 +0000 (0:00:01.504) 0:00:32.084 ******** 2026-04-04 01:06:49.070481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070488 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:49.070495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070500 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:49.070504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070508 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:49.070511 | orchestrator |
2026-04-04 01:06:49.070515 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-04-04 01:06:49.070519 | orchestrator | Saturday 04 April 2026 01:06:06 +0000 (0:00:00.616) 0:00:32.701 ********
2026-04-04 01:06:49.070525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070529 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:49.070533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070540 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:49.070548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070552 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:49.070556 | orchestrator |
2026-04-04 01:06:49.070560 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-04-04 01:06:49.070564 | orchestrator | Saturday 04 April 2026 01:06:07 +0000 (0:00:00.628) 0:00:33.329 ********
2026-04-04 01:06:49.070568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070587 | orchestrator |
2026-04-04 01:06:49.070591 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-04-04 01:06:49.070594 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:01.642) 0:00:34.972 ********
2026-04-04 01:06:49.070601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070618 | orchestrator |
2026-04-04 01:06:49.070622 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-04-04 01:06:49.070626 | orchestrator | Saturday 04 April 2026 01:06:11 +0000 (0:00:02.471) 0:00:37.444 ********
2026-04-04 01:06:49.070630 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-04 01:06:49.070634 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:49.070638 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-04 01:06:49.070642 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:49.070646 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-04 01:06:49.070650 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:49.070653 | orchestrator |
2026-04-04 01:06:49.070657 | orchestrator | TASK [Configure uWSGI for Placement] *******************************************
2026-04-04 01:06:49.070661 | orchestrator | Saturday 04 April 2026 01:06:12 +0000 (0:00:00.481) 0:00:37.925 ********
2026-04-04 01:06:49.070665 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:06:49.070669 | orchestrator |
2026-04-04 01:06:49.070673 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] **********
2026-04-04 01:06:49.070678 | orchestrator | Saturday 04 April 2026 01:06:13 +0000 (0:00:00.945) 0:00:38.871 ********
2026-04-04 01:06:49.070682 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:49.070686 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:06:49.070690 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:06:49.070694 | orchestrator |
2026-04-04 01:06:49.070697 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-04-04 01:06:49.070701 | orchestrator | Saturday 04 April 2026 01:06:14 +0000 (0:00:01.630) 0:00:40.501 ********
2026-04-04 01:06:49.070705 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:49.070709 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:06:49.070712 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:06:49.070716 | orchestrator |
2026-04-04 01:06:49.070720 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-04-04 01:06:49.070725 | orchestrator | Saturday 04 April 2026
01:06:15 +0000 (0:00:01.228) 0:00:41.730 ********
2026-04-04 01:06:49.070730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070786 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:49.070794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070799 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:49.070804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070809 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:49.070814 | orchestrator |
2026-04-04 01:06:49.070818 | orchestrator | TASK [service-check-containers : placement | Check containers] *****************
2026-04-04 01:06:49.070821 | orchestrator | Saturday 04 April 2026 01:06:16 +0000 (0:00:00.737) 0:00:42.467 ********
2026-04-04 01:06:49.070829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070846 | orchestrator |
2026-04-04 01:06:49.070850 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] ***
2026-04-04 01:06:49.070854 | orchestrator | Saturday 04 April 2026 01:06:18 +0000 (0:00:01.414) 0:00:43.882 ********
2026-04-04 01:06:49.070858 | orchestrator | changed: [testbed-node-0] => {
2026-04-04 01:06:49.070861 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 01:06:49.070865 | orchestrator | }
2026-04-04 01:06:49.070869 | orchestrator | changed: [testbed-node-1] => {
2026-04-04 01:06:49.070875 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 01:06:49.070885 | orchestrator | }
2026-04-04 01:06:49.070891 | orchestrator | changed: [testbed-node-2] => {
2026-04-04 01:06:49.070898 | orchestrator |  "msg": "Notifying handlers"
2026-04-04 01:06:49.070904 | orchestrator | }
2026-04-04 01:06:49.070909 | orchestrator |
2026-04-04 01:06:49.070915 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-04 01:06:49.070922 | orchestrator | Saturday 04 April 2026 01:06:18 +0000 (0:00:00.395) 0:00:44.278 ********
2026-04-04 01:06:49.070933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070940 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:49.070953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070960 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:49.070971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-04 01:06:49.070979 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:49.070988 | orchestrator |
2026-04-04 01:06:49.070995 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-04-04 01:06:49.071001 | orchestrator | Saturday 04 April 2026 01:06:19 +0000 (0:00:01.064) 0:00:45.342 ********
2026-04-04 01:06:49.071008 | orchestrator | changed: [testbed-node-0]
2026-04-04
01:06:49.071014 | orchestrator |
2026-04-04 01:06:49.071019 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-04-04 01:06:49.071025 | orchestrator | Saturday 04 April 2026 01:06:22 +0000 (0:00:02.661) 0:00:48.004 ********
2026-04-04 01:06:49.071031 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:49.071037 | orchestrator |
2026-04-04 01:06:49.071042 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-04 01:06:49.071049 | orchestrator | Saturday 04 April 2026 01:06:24 +0000 (0:00:02.635) 0:00:50.639 ********
2026-04-04 01:06:49.071054 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:49.071060 | orchestrator |
2026-04-04 01:06:49.071067 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-04 01:06:49.071074 | orchestrator | Saturday 04 April 2026 01:06:39 +0000 (0:00:14.895) 0:01:05.534 ********
2026-04-04 01:06:49.071080 | orchestrator |
2026-04-04 01:06:49.071087 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-04 01:06:49.071093 | orchestrator | Saturday 04 April 2026 01:06:39 +0000 (0:00:00.057) 0:01:05.592 ********
2026-04-04 01:06:49.071099 | orchestrator |
2026-04-04 01:06:49.071106 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-04 01:06:49.071113 | orchestrator | Saturday 04 April 2026 01:06:39 +0000 (0:00:00.057) 0:01:05.650 ********
2026-04-04 01:06:49.071117 | orchestrator |
2026-04-04 01:06:49.071121 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-04 01:06:49.071128 | orchestrator | Saturday 04 April 2026 01:06:39 +0000 (0:00:00.062) 0:01:05.713 ********
2026-04-04 01:06:49.071132 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:49.071136 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:06:49.071140 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:06:49.071144 | orchestrator |
2026-04-04 01:06:49.071151 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:06:49.071156 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-04 01:06:49.071161 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 01:06:49.071165 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 01:06:49.071168 | orchestrator |
2026-04-04 01:06:49.071172 | orchestrator |
2026-04-04 01:06:49.071176 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:06:49.071180 | orchestrator | Saturday 04 April 2026 01:06:47 +0000 (0:00:07.457) 0:01:13.170 ********
2026-04-04 01:06:49.071184 | orchestrator | ===============================================================================
2026-04-04 01:06:49.071187 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.90s
2026-04-04 01:06:49.071191 | orchestrator | placement : Restart placement-api container ----------------------------- 7.46s
2026-04-04 01:06:49.071195 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 6.76s
2026-04-04 01:06:49.071199 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.89s
2026-04-04 01:06:49.071203 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.59s
2026-04-04 01:06:49.071206 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.54s
2026-04-04 01:06:49.071210 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 3.51s
2026-04-04 01:06:49.071214 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 3.48s
2026-04-04 01:06:49.071218 | orchestrator | placement : Creating placement databases -------------------------------- 2.66s
2026-04-04 01:06:49.071221 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.64s
2026-04-04 01:06:49.071225 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.47s
2026-04-04 01:06:49.071229 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.69s
2026-04-04 01:06:49.071233 | orchestrator | placement : Copying over config.json files for services ----------------- 1.64s
2026-04-04 01:06:49.071239 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 1.63s
2026-04-04 01:06:49.071243 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.50s
2026-04-04 01:06:49.071247 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.42s
2026-04-04 01:06:49.071251 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.23s
2026-04-04 01:06:49.071254 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.06s
2026-04-04 01:06:49.071258 | orchestrator | Configure uWSGI for Placement ------------------------------------------- 0.95s
2026-04-04 01:06:49.071262 | orchestrator | placement : Copying over existing policy file --------------------------- 0.74s
2026-04-04 01:06:52.100481 | orchestrator | 2026-04-04 01:06:52 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED
2026-04-04 01:06:52.101040 | orchestrator | 2026-04-04 01:06:52 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED
2026-04-04 01:06:52.101850 | orchestrator | 2026-04-04 01:06:52 | INFO  | Task 405a6e7c-2ebb-4e73-8df8-1ff3d67275da is in state STARTED
2026-04-04 01:06:52.102281 | orchestrator | 2026-04-04 01:06:52 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:06:52.102306 | orchestrator | 2026-04-04 01:06:52 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:06:55.151567 | orchestrator | 2026-04-04 01:06:55 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED
2026-04-04 01:06:55.152235 | orchestrator | 2026-04-04 01:06:55 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED
2026-04-04 01:06:55.153706 | orchestrator | 2026-04-04 01:06:55 | INFO  | Task 405a6e7c-2ebb-4e73-8df8-1ff3d67275da is in state SUCCESS
2026-04-04 01:06:55.154688 | orchestrator | 2026-04-04 01:06:55 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:06:55.155599 | orchestrator | 2026-04-04 01:06:55 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED
2026-04-04 01:06:55.155615 | orchestrator | 2026-04-04 01:06:55 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:06:58.191260 | orchestrator | 2026-04-04 01:06:58 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED
2026-04-04 01:06:58.191377 | orchestrator | 2026-04-04 01:06:58 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED
2026-04-04 01:06:58.191387 | orchestrator | 2026-04-04 01:06:58 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:06:58.191572 | orchestrator | 2026-04-04 01:06:58 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED
2026-04-04 01:06:58.191586 | orchestrator | 2026-04-04 01:06:58 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:07:01.233871 | orchestrator | 2026-04-04 01:07:01 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state STARTED
2026-04-04 01:07:01.236845 | orchestrator | 2026-04-04 01:07:01 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED
2026-04-04 01:07:01.238230 | orchestrator
| 2026-04-04 01:07:01 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED
2026-04-04 01:07:01.239908 | orchestrator | 2026-04-04 01:07:01 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED
2026-04-04 01:07:01.239946 | orchestrator | 2026-04-04 01:07:01 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:07:04.280248 | orchestrator |
2026-04-04 01:07:04.280341 | orchestrator |
2026-04-04 01:07:04.280353 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:07:04.280361 | orchestrator |
2026-04-04 01:07:04.280368 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 01:07:04.280375 | orchestrator | Saturday 04 April 2026 01:06:51 +0000 (0:00:00.203) 0:00:00.203 ********
2026-04-04 01:07:04.280382 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:07:04.280389 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:07:04.280396 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:07:04.280402 | orchestrator |
2026-04-04 01:07:04.280409 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:07:04.280415 | orchestrator | Saturday 04 April 2026 01:06:51 +0000 (0:00:00.457) 0:00:00.661 ********
2026-04-04 01:07:04.280421 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-04-04 01:07:04.280428 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-04-04 01:07:04.280435 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-04-04 01:07:04.280441 | orchestrator |
2026-04-04 01:07:04.280447 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-04-04 01:07:04.280454 | orchestrator |
2026-04-04 01:07:04.280460 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-04-04 01:07:04.280466 | orchestrator | Saturday 04 April 2026 01:06:52 +0000 (0:00:00.553) 0:00:01.214 ********
2026-04-04 01:07:04.280472 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:07:04.280581 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:07:04.280590 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:07:04.280594 | orchestrator |
2026-04-04 01:07:04.280598 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:07:04.280614 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 01:07:04.280620 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 01:07:04.280624 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 01:07:04.280628 | orchestrator |
2026-04-04 01:07:04.280632 | orchestrator |
2026-04-04 01:07:04.280636 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:07:04.280640 | orchestrator | Saturday 04 April 2026 01:06:53 +0000 (0:00:00.986) 0:00:02.201 ********
2026-04-04 01:07:04.280643 | orchestrator | ===============================================================================
2026-04-04 01:07:04.280647 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.99s
2026-04-04 01:07:04.280651 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2026-04-04 01:07:04.280655 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2026-04-04 01:07:04.280659 | orchestrator |
2026-04-04 01:07:04.280662 | orchestrator |
2026-04-04 01:07:04.280666 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:07:04.280670 | orchestrator |
2026-04-04 01:07:04.280674 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 01:07:04.280678 | orchestrator | Saturday 04 April 2026 01:05:45 +0000 (0:00:00.291) 0:00:00.291 ********
2026-04-04 01:07:04.280681 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:07:04.280685 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:07:04.280689 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:07:04.280693 | orchestrator |
2026-04-04 01:07:04.280696 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:07:04.280700 | orchestrator | Saturday 04 April 2026 01:05:45 +0000 (0:00:00.269) 0:00:00.561 ********
2026-04-04 01:07:04.280704 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-04 01:07:04.280726 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-04 01:07:04.280732 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-04 01:07:04.280739 | orchestrator |
2026-04-04 01:07:04.280745 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-04 01:07:04.280748 | orchestrator |
2026-04-04 01:07:04.280752 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-04 01:07:04.280756 | orchestrator | Saturday 04 April 2026 01:05:45 +0000 (0:00:00.271) 0:00:00.832 ********
2026-04-04 01:07:04.280761 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:07:04.280765 | orchestrator |
2026-04-04 01:07:04.280768 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-04 01:07:04.280772 | orchestrator | Saturday 04 April 2026 01:05:46 +0000 (0:00:01.003) 0:00:01.835 ********
2026-04-04 01:07:04.280779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.280822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.280830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.280834 | orchestrator | 2026-04-04 01:07:04.280838 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-04 01:07:04.280842 | orchestrator | Saturday 04 April 2026 01:05:47 +0000 (0:00:01.077) 0:00:02.913 ******** 2026-04-04 01:07:04.280847 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:07:04.280851 | orchestrator | 2026-04-04 01:07:04.280855 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-04 01:07:04.280859 | orchestrator | Saturday 04 April 2026 01:05:48 +0000 (0:00:00.833) 0:00:03.747 ******** 2026-04-04 01:07:04.280863 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:07:04.280867 | orchestrator | 2026-04-04 01:07:04.280871 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-04 01:07:04.280875 | orchestrator | Saturday 04 April 2026 01:05:49 +0000 (0:00:00.475) 0:00:04.222 ******** 2026-04-04 01:07:04.280879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.280883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.280894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.280899 | orchestrator | 2026-04-04 01:07:04.280903 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-04 01:07:04.280906 | orchestrator | Saturday 04 April 2026 01:05:50 +0000 (0:00:01.695) 0:00:05.917 ******** 2026-04-04 01:07:04.280914 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.280918 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:04.280922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.280926 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:04.280930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.280934 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:04.280938 | orchestrator | 2026-04-04 01:07:04.280942 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-04 01:07:04.280946 | orchestrator | Saturday 04 April 2026 01:05:51 +0000 (0:00:00.442) 0:00:06.359 ******** 2026-04-04 01:07:04.280950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.280957 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:04.280966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.280970 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:04.280977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.280983 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:04.280989 | orchestrator | 2026-04-04 01:07:04.280996 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-04 01:07:04.281002 | orchestrator | Saturday 04 April 2026 01:05:52 +0000 (0:00:00.586) 0:00:06.946 ******** 2026-04-04 01:07:04.281008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.281014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.281025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.281032 | orchestrator | 2026-04-04 01:07:04.281039 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-04 01:07:04.281046 | orchestrator | Saturday 04 April 2026 01:05:53 +0000 (0:00:01.311) 0:00:08.258 ******** 2026-04-04 01:07:04.281058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.281081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 
01:07:04.281085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:07:04.281089 | orchestrator |
2026-04-04 01:07:04.281093 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-04 01:07:04.281097 | orchestrator | Saturday 04 April 2026 01:05:55 +0000 (0:00:01.692) 0:00:09.950 ********
2026-04-04 01:07:04.281101 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:07:04.281105 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:07:04.281109 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:07:04.281115 | orchestrator |
2026-04-04 01:07:04.281121 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-04 01:07:04.281127 | orchestrator | Saturday 04 April 2026 01:05:55 +0000 (0:00:00.299) 0:00:10.250 ********
2026-04-04 01:07:04.281139 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-04 01:07:04.281145 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-04 01:07:04.281151 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-04 01:07:04.281158 | orchestrator |
2026-04-04 01:07:04.281165 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-04 01:07:04.281171 | orchestrator | Saturday 04 April 2026 01:05:56 +0000 (0:00:01.381) 0:00:11.631 ********
2026-04-04 01:07:04.281177 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-04 01:07:04.281184 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-04 01:07:04.281191 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-04 01:07:04.281196 | orchestrator |
2026-04-04 01:07:04.281202 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-04-04 01:07:04.281208 | orchestrator | Saturday 04 April 2026 01:05:58 +0000 (0:00:01.442) 0:00:13.073 ********
2026-04-04 01:07:04.281214 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 01:07:04.281220 | orchestrator |
2026-04-04 01:07:04.281226 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-04-04 01:07:04.281232 | orchestrator | Saturday 04 April 2026 01:05:59 +0000 (0:00:00.946) 0:00:14.020 ********
2026-04-04 01:07:04.281237 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:07:04.281243 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:07:04.281249 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:07:04.281255 | orchestrator |
2026-04-04 01:07:04.281262 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-04 01:07:04.281267 | orchestrator | Saturday 04 April 2026 01:05:59 +0000 (0:00:00.640) 0:00:14.661 ********
2026-04-04 01:07:04.281273 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:07:04.281279 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:07:04.281285 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:07:04.281291 | orchestrator | 2026-04-04 01:07:04.281297 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-04 01:07:04.281307 | orchestrator | Saturday 04 April 2026 01:06:00 +0000 (0:00:01.176) 0:00:15.838 ******** 2026-04-04 01:07:04.281314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.281324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.281336 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:04.281345 | orchestrator | 2026-04-04 01:07:04.281351 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-04-04 01:07:04.281357 | orchestrator | Saturday 04 April 2026 01:06:02 +0000 (0:00:01.134) 0:00:16.973 ******** 2026-04-04 01:07:04.281363 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:07:04.281369 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:07:04.281376 | orchestrator | } 2026-04-04 01:07:04.281382 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:07:04.281389 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:07:04.281395 | orchestrator | } 2026-04-04 01:07:04.281402 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:07:04.281408 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:07:04.281413 | orchestrator | } 2026-04-04 01:07:04.281420 | orchestrator | 2026-04-04 01:07:04.281426 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:07:04.281432 | orchestrator | Saturday 04 April 2026 01:06:02 +0000 (0:00:00.381) 0:00:17.354 ******** 2026-04-04 01:07:04.281439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.281445 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:04.281456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:04.281462 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:04.281473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:07:04.281491 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:07:04.281498 | orchestrator |
2026-04-04 01:07:04.281504 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-04 01:07:04.281510 | orchestrator | Saturday 04 April 2026 01:06:03 +0000 (0:00:01.004) 0:00:18.359 ********
2026-04-04 01:07:04.281515 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:07:04.281521 | orchestrator |
2026-04-04 01:07:04.281527 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-04-04 01:07:04.281533 | orchestrator | Saturday 04 April 2026 01:06:06 +0000 (0:00:03.084) 0:00:21.444 ********
2026-04-04 01:07:04.281539 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:07:04.281546 | orchestrator |
2026-04-04 01:07:04.281551 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-04 01:07:04.281557 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:02.576) 0:00:24.021 ********
2026-04-04 01:07:04.281563 | orchestrator |
2026-04-04 01:07:04.281573 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-04 01:07:04.281578 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:00.058) 0:00:24.080 ********
2026-04-04 01:07:04.281584 | orchestrator |
2026-04-04 01:07:04.281590 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-04 01:07:04.281596 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:00.062) 0:00:24.142 ********
2026-04-04 01:07:04.281601 | orchestrator |
2026-04-04 01:07:04.281607 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-04 01:07:04.281613 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:00.087) 0:00:24.230 ********
2026-04-04 01:07:04.281619 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:07:04.281625 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:07:04.281630 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:07:04.281636 | orchestrator |
2026-04-04 01:07:04.281643 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-04 01:07:04.281648 | orchestrator | Saturday 04 April 2026 01:06:11 +0000 (0:00:01.809) 0:00:26.040 ********
2026-04-04 01:07:04.281655 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:07:04.281661 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:07:04.281667 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-04 01:07:04.281673 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:07:04.281679 | orchestrator |
2026-04-04 01:07:04.281687 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-04 01:07:04.281694 | orchestrator | Saturday 04 April 2026 01:06:25 +0000 (0:00:14.444) 0:00:40.484 ********
2026-04-04 01:07:04.281700 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:07:04.281801 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:07:04.281814 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:07:04.281821 | orchestrator |
2026-04-04 01:07:04.281827 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-04-04 01:07:04.281833 | orchestrator | Saturday 04 April 2026 01:06:56 +0000 (0:00:31.160) 0:01:11.644 ********
2026-04-04 01:07:04.281838 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:07:04.281844 | orchestrator |
2026-04-04 01:07:04.281850 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-04-04 01:07:04.281855 | orchestrator | Saturday 04 April 2026 01:06:59 +0000 (0:00:02.758) 0:01:14.403 ********
2026-04-04 01:07:04.281861 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:07:04.281867 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:07:04.281874 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:07:04.281879 | orchestrator |
2026-04-04 01:07:04.281885 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-04-04 01:07:04.281900 | orchestrator | Saturday 04 April 2026 01:06:59 +0000 (0:00:00.222) 0:01:14.625 ********
2026-04-04 01:07:04.281908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})  2026-04-04 01:07:04.281926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-04 01:07:04.281933 | orchestrator | 2026-04-04 01:07:04.281937 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-04 01:07:04.281940 | orchestrator | Saturday 04 April 2026 01:07:02 +0000 (0:00:02.600) 0:01:17.226 ******** 2026-04-04 01:07:04.281944 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:04.281948 | orchestrator | 2026-04-04 01:07:04.281952 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:07:04.281956 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:07:04.281960 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:07:04.281964 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:07:04.281968 | orchestrator | 2026-04-04 01:07:04.281972 | orchestrator | 2026-04-04 01:07:04.281976 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:07:04.281990 | orchestrator | Saturday 04 April 2026 01:07:02 +0000 (0:00:00.478) 0:01:17.704 ******** 2026-04-04 01:07:04.281996 | orchestrator | =============================================================================== 2026-04-04 01:07:04.282001 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.16s 2026-04-04 01:07:04.282007 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 14.44s 2026-04-04 01:07:04.282157 | orchestrator | grafana : Creating grafana database ------------------------------------- 3.08s 2026-04-04 01:07:04.282167 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.76s 2026-04-04 01:07:04.282171 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.60s 2026-04-04 01:07:04.282175 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.58s 2026-04-04 01:07:04.282179 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.81s 2026-04-04 01:07:04.282183 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.70s 2026-04-04 01:07:04.282187 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.69s 2026-04-04 01:07:04.282191 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.44s 2026-04-04 01:07:04.282195 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.38s 2026-04-04 01:07:04.282199 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.31s 2026-04-04 01:07:04.282203 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.18s 2026-04-04 01:07:04.282207 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.13s 2026-04-04 01:07:04.282211 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.08s 2026-04-04 01:07:04.282215 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.01s 2026-04-04 01:07:04.282219 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.00s 2026-04-04 01:07:04.282230 | orchestrator | grafana : Check if the folder for custom 
grafana dashboards exists ------ 0.95s 2026-04-04 01:07:04.282234 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s 2026-04-04 01:07:04.282238 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.64s 2026-04-04 01:07:04.282243 | orchestrator | 2026-04-04 01:07:04 | INFO  | Task a01fbe0b-44b6-40c3-93e9-46e4c4db907c is in state SUCCESS 2026-04-04 01:07:04.282247 | orchestrator | 2026-04-04 01:07:04 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:04.282255 | orchestrator | 2026-04-04 01:07:04 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:04.284029 | orchestrator | 2026-04-04 01:07:04 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:04.284089 | orchestrator | 2026-04-04 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:07.324740 | orchestrator | 2026-04-04 01:07:07 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:07.325745 | orchestrator | 2026-04-04 01:07:07 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:07.327927 | orchestrator | 2026-04-04 01:07:07 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:07.327951 | orchestrator | 2026-04-04 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:10.382385 | orchestrator | 2026-04-04 01:07:10 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:10.382485 | orchestrator | 2026-04-04 01:07:10 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:10.382495 | orchestrator | 2026-04-04 01:07:10 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:10.382503 | orchestrator | 2026-04-04 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-04-04 
01:07:13.385653 | orchestrator | 2026-04-04 01:07:13 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:13.385894 | orchestrator | 2026-04-04 01:07:13 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:13.386891 | orchestrator | 2026-04-04 01:07:13 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:13.386959 | orchestrator | 2026-04-04 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:16.426674 | orchestrator | 2026-04-04 01:07:16 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:16.427962 | orchestrator | 2026-04-04 01:07:16 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:16.430126 | orchestrator | 2026-04-04 01:07:16 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:16.430176 | orchestrator | 2026-04-04 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:19.475498 | orchestrator | 2026-04-04 01:07:19 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:19.477004 | orchestrator | 2026-04-04 01:07:19 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:19.478654 | orchestrator | 2026-04-04 01:07:19 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:19.478984 | orchestrator | 2026-04-04 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:22.517087 | orchestrator | 2026-04-04 01:07:22 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:22.518422 | orchestrator | 2026-04-04 01:07:22 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:22.519978 | orchestrator | 2026-04-04 01:07:22 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:22.520007 | orchestrator 
| 2026-04-04 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:25.551111 | orchestrator | 2026-04-04 01:07:25 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:25.551635 | orchestrator | 2026-04-04 01:07:25 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:25.553363 | orchestrator | 2026-04-04 01:07:25 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:25.553408 | orchestrator | 2026-04-04 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:28.577261 | orchestrator | 2026-04-04 01:07:28 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:28.577625 | orchestrator | 2026-04-04 01:07:28 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:28.578264 | orchestrator | 2026-04-04 01:07:28 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:28.578288 | orchestrator | 2026-04-04 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:31.608945 | orchestrator | 2026-04-04 01:07:31 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:31.609211 | orchestrator | 2026-04-04 01:07:31 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:31.610127 | orchestrator | 2026-04-04 01:07:31 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:31.610410 | orchestrator | 2026-04-04 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:34.643393 | orchestrator | 2026-04-04 01:07:34 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:34.646100 | orchestrator | 2026-04-04 01:07:34 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:34.646865 | orchestrator | 2026-04-04 01:07:34 | INFO  | Task 
0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:34.646882 | orchestrator | 2026-04-04 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:37.689881 | orchestrator | 2026-04-04 01:07:37 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state STARTED 2026-04-04 01:07:37.691078 | orchestrator | 2026-04-04 01:07:37 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:37.693583 | orchestrator | 2026-04-04 01:07:37 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:37.693647 | orchestrator | 2026-04-04 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:40.735851 | orchestrator | 2026-04-04 01:07:40 | INFO  | Task 72da0b0a-bd08-4a1e-b0d9-2c9412f9e59a is in state SUCCESS 2026-04-04 01:07:40.737387 | orchestrator | 2026-04-04 01:07:40.737442 | orchestrator | 2026-04-04 01:07:40.737457 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:07:40.737465 | orchestrator | 2026-04-04 01:07:40.737470 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:07:40.737475 | orchestrator | Saturday 04 April 2026 01:05:42 +0000 (0:00:00.307) 0:00:00.307 ******** 2026-04-04 01:07:40.737479 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:07:40.737483 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:07:40.737487 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:07:40.737491 | orchestrator | 2026-04-04 01:07:40.737495 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:07:40.737510 | orchestrator | Saturday 04 April 2026 01:05:42 +0000 (0:00:00.265) 0:00:00.573 ******** 2026-04-04 01:07:40.737514 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-04 01:07:40.737519 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 
2026-04-04 01:07:40.737522 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-04 01:07:40.737526 | orchestrator | 2026-04-04 01:07:40.737530 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-04 01:07:40.737534 | orchestrator | 2026-04-04 01:07:40.737544 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-04 01:07:40.737548 | orchestrator | Saturday 04 April 2026 01:05:43 +0000 (0:00:00.264) 0:00:00.837 ******** 2026-04-04 01:07:40.737552 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:07:40.737559 | orchestrator | 2026-04-04 01:07:40.737568 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-04-04 01:07:40.737576 | orchestrator | Saturday 04 April 2026 01:05:43 +0000 (0:00:00.578) 0:00:01.416 ******** 2026-04-04 01:07:40.737583 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-04 01:07:40.737589 | orchestrator | 2026-04-04 01:07:40.737595 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] ************** 2026-04-04 01:07:40.737602 | orchestrator | Saturday 04 April 2026 01:05:47 +0000 (0:00:03.595) 0:00:05.012 ******** 2026-04-04 01:07:40.737608 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-04 01:07:40.737614 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-04 01:07:40.737618 | orchestrator | 2026-04-04 01:07:40.737622 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-04 01:07:40.737626 | orchestrator | Saturday 04 April 2026 01:05:54 +0000 (0:00:07.503) 0:00:12.515 ******** 2026-04-04 01:07:40.737630 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2026-04-04 01:07:40.737634 | orchestrator | 2026-04-04 01:07:40.737638 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-04 01:07:40.737641 | orchestrator | Saturday 04 April 2026 01:05:58 +0000 (0:00:03.506) 0:00:16.022 ******** 2026-04-04 01:07:40.737657 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-04 01:07:40.737662 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:07:40.737665 | orchestrator | 2026-04-04 01:07:40.737669 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-04 01:07:40.737673 | orchestrator | Saturday 04 April 2026 01:06:02 +0000 (0:00:03.635) 0:00:19.657 ******** 2026-04-04 01:07:40.737677 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:07:40.737681 | orchestrator | 2026-04-04 01:07:40.737685 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] ************* 2026-04-04 01:07:40.737688 | orchestrator | Saturday 04 April 2026 01:06:06 +0000 (0:00:04.112) 0:00:23.769 ******** 2026-04-04 01:07:40.737692 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-04 01:07:40.737696 | orchestrator | 2026-04-04 01:07:40.737700 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-04 01:07:40.737703 | orchestrator | Saturday 04 April 2026 01:06:10 +0000 (0:00:04.007) 0:00:27.777 ******** 2026-04-04 01:07:40.737707 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:07:40.737711 | orchestrator | 2026-04-04 01:07:40.737786 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-04 01:07:40.737831 | orchestrator | Saturday 04 April 2026 01:06:13 +0000 (0:00:03.222) 0:00:30.999 ******** 2026-04-04 01:07:40.737836 | orchestrator | changed: [testbed-node-0] 
2026-04-04 01:07:40.737840 | orchestrator | 2026-04-04 01:07:40.737844 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-04 01:07:40.737853 | orchestrator | Saturday 04 April 2026 01:06:17 +0000 (0:00:03.906) 0:00:34.906 ******** 2026-04-04 01:07:40.737857 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:07:40.737861 | orchestrator | 2026-04-04 01:07:40.737865 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-04 01:07:40.737869 | orchestrator | Saturday 04 April 2026 01:06:20 +0000 (0:00:03.641) 0:00:38.548 ******** 2026-04-04 01:07:40.737883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.737893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.737897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.737902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.737910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.737917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.737921 | orchestrator | 2026-04-04 01:07:40.737925 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-04 01:07:40.737929 | orchestrator | Saturday 04 April 2026 01:06:22 +0000 (0:00:02.051) 0:00:40.600 ******** 2026-04-04 01:07:40.737933 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:40.737937 | orchestrator | 2026-04-04 01:07:40.737941 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-04 01:07:40.737947 | orchestrator | Saturday 04 April 2026 01:06:23 +0000 (0:00:00.131) 0:00:40.731 ******** 2026-04-04 01:07:40.737951 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:40.737954 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:40.737958 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:40.737962 | orchestrator | 2026-04-04 01:07:40.737966 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-04 01:07:40.737969 | orchestrator | Saturday 04 April 2026 01:06:23 +0000 (0:00:00.273) 0:00:41.004 ******** 2026-04-04 01:07:40.738174 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:07:40.738193 | orchestrator | 2026-04-04 01:07:40.738200 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-04 01:07:40.738206 | orchestrator | Saturday 04 April 2026 01:06:24 +0000 (0:00:00.862) 0:00:41.866 ******** 2026-04-04 01:07:40.738214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738258 | orchestrator | 2026-04-04 01:07:40.738262 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-04 01:07:40.738266 | orchestrator | Saturday 04 April 2026 01:06:27 +0000 (0:00:02.959) 0:00:44.826 ******** 2026-04-04 01:07:40.738270 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:07:40.738274 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:07:40.738277 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:07:40.738281 | orchestrator | 2026-04-04 01:07:40.738285 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-04 01:07:40.738289 | orchestrator | Saturday 04 April 2026 01:06:27 +0000 (0:00:00.495) 0:00:45.322 ******** 2026-04-04 01:07:40.738293 | orchestrator | included: 
/ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:07:40.738297 | orchestrator | 2026-04-04 01:07:40.738302 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-04 01:07:40.738308 | orchestrator | Saturday 04 April 2026 01:06:28 +0000 (0:00:00.505) 0:00:45.827 ******** 2026-04-04 01:07:40.738318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738372 | orchestrator | 2026-04-04 01:07:40.738378 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend 
internal TLS certificate] *** 2026-04-04 01:07:40.738385 | orchestrator | Saturday 04 April 2026 01:06:30 +0000 (0:00:02.408) 0:00:48.236 ******** 2026-04-04 01:07:40.738396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738413 | 
orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:40.738418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738436 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:40.738440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738449 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:40.738453 | orchestrator | 2026-04-04 01:07:40.738457 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] 
****** 2026-04-04 01:07:40.738461 | orchestrator | Saturday 04 April 2026 01:06:32 +0000 (0:00:02.199) 0:00:50.436 ******** 2026-04-04 01:07:40.738465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738473 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 01:07:40.738480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738494 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:40.738508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738517 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:40.738521 | orchestrator | 2026-04-04 01:07:40.738524 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-04 01:07:40.738528 
| orchestrator | Saturday 04 April 2026 01:06:33 +0000 (0:00:01.139) 0:00:51.575 ******** 2026-04-04 01:07:40.738535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 
01:07:40.738561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738583 | orchestrator | 2026-04-04 01:07:40.738589 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-04 01:07:40.738596 | orchestrator | Saturday 04 April 2026 01:06:36 +0000 (0:00:02.460) 0:00:54.035 ******** 2026-04-04 01:07:40.738605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738625 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738760 | orchestrator | 2026-04-04 01:07:40.738764 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-04 01:07:40.738768 | orchestrator | Saturday 04 April 2026 01:06:43 +0000 (0:00:06.883) 0:01:00.919 ******** 2026-04-04 01:07:40.738772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738780 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:40.738789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738804 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:40.738814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.738823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.738829 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:40.738835 | orchestrator | 2026-04-04 01:07:40.738841 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-04-04 01:07:40.738848 | orchestrator | Saturday 04 April 2026 01:06:43 +0000 (0:00:00.657) 0:01:01.576 ******** 2026-04-04 01:07:40.738859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:07:40.738903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:07:40.738931 | orchestrator | 2026-04-04 01:07:40.738938 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-04-04 01:07:40.738945 | orchestrator | Saturday 04 April 2026 01:06:46 +0000 (0:00:02.306) 0:01:03.883 ******** 2026-04-04 01:07:40.738951 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:07:40.738958 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:07:40.738964 | orchestrator | } 2026-04-04 01:07:40.738971 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:07:40.738977 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:07:40.738984 | orchestrator | } 2026-04-04 01:07:40.738990 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:07:40.738997 | orchestrator |  "msg": "Notifying handlers" 
2026-04-04 01:07:40.739003 | orchestrator | } 2026-04-04 01:07:40.739009 | orchestrator | 2026-04-04 01:07:40.739018 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:07:40.739025 | orchestrator | Saturday 04 April 2026 01:06:46 +0000 (0:00:00.242) 0:01:04.126 ******** 2026-04-04 01:07:40.739032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.739039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.739046 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:40.739052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.739070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.739077 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:40.739087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:07:40.739095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:07:40.739101 | orchestrator | 
skipping: [testbed-node-1] 2026-04-04 01:07:40.739108 | orchestrator | 2026-04-04 01:07:40.739114 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-04 01:07:40.739120 | orchestrator | Saturday 04 April 2026 01:06:47 +0000 (0:00:00.874) 0:01:05.000 ******** 2026-04-04 01:07:40.739126 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:07:40.739133 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:07:40.739139 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:07:40.739146 | orchestrator | 2026-04-04 01:07:40.739152 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-04 01:07:40.739158 | orchestrator | Saturday 04 April 2026 01:06:47 +0000 (0:00:00.229) 0:01:05.229 ******** 2026-04-04 01:07:40.739165 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:07:40.739171 | orchestrator | 2026-04-04 01:07:40.739177 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-04 01:07:40.739188 | orchestrator | Saturday 04 April 2026 01:06:49 +0000 (0:00:02.067) 0:01:07.297 ******** 2026-04-04 01:07:40.739194 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:07:40.739200 | orchestrator | 2026-04-04 01:07:40.739206 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-04 01:07:40.739212 | orchestrator | Saturday 04 April 2026 01:06:51 +0000 (0:00:02.224) 0:01:09.521 ******** 2026-04-04 01:07:40.739218 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:07:40.739225 | orchestrator | 2026-04-04 01:07:40.739231 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-04 01:07:40.739237 | orchestrator | Saturday 04 April 2026 01:07:08 +0000 (0:00:17.011) 0:01:26.532 ******** 2026-04-04 01:07:40.739243 | orchestrator | 2026-04-04 01:07:40.739249 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-04-04 01:07:40.739255 | orchestrator | Saturday 04 April 2026 01:07:08 +0000 (0:00:00.058) 0:01:26.591 ******** 2026-04-04 01:07:40.739260 | orchestrator | 2026-04-04 01:07:40.739266 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-04 01:07:40.739272 | orchestrator | Saturday 04 April 2026 01:07:09 +0000 (0:00:00.059) 0:01:26.650 ******** 2026-04-04 01:07:40.739278 | orchestrator | 2026-04-04 01:07:40.739283 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-04 01:07:40.739289 | orchestrator | Saturday 04 April 2026 01:07:09 +0000 (0:00:00.061) 0:01:26.712 ******** 2026-04-04 01:07:40.739295 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:07:40.739301 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:07:40.739306 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:07:40.739313 | orchestrator | 2026-04-04 01:07:40.739319 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-04 01:07:40.739325 | orchestrator | Saturday 04 April 2026 01:07:23 +0000 (0:00:14.443) 0:01:41.156 ******** 2026-04-04 01:07:40.739331 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:07:40.739341 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:07:40.739348 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:07:40.739354 | orchestrator | 2026-04-04 01:07:40.739361 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:07:40.739368 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:07:40.739375 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 01:07:40.739382 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-04-04 01:07:40.739388 | orchestrator | 2026-04-04 01:07:40.739394 | orchestrator | 2026-04-04 01:07:40.739400 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:07:40.739406 | orchestrator | Saturday 04 April 2026 01:07:38 +0000 (0:00:15.037) 0:01:56.193 ******** 2026-04-04 01:07:40.739416 | orchestrator | =============================================================================== 2026-04-04 01:07:40.739422 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.01s 2026-04-04 01:07:40.739428 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.04s 2026-04-04 01:07:40.739434 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.44s 2026-04-04 01:07:40.739440 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 7.50s 2026-04-04 01:07:40.739447 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.88s 2026-04-04 01:07:40.739454 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.11s 2026-04-04 01:07:40.739460 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 4.01s 2026-04-04 01:07:40.739467 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.91s 2026-04-04 01:07:40.739479 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.64s 2026-04-04 01:07:40.739486 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.64s 2026-04-04 01:07:40.739492 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.60s 2026-04-04 01:07:40.739498 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.51s 2026-04-04 01:07:40.739505 | 
orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.22s 2026-04-04 01:07:40.739511 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.96s 2026-04-04 01:07:40.739518 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.46s 2026-04-04 01:07:40.739524 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.41s 2026-04-04 01:07:40.739531 | orchestrator | service-check-containers : magnum | Check containers -------------------- 2.31s 2026-04-04 01:07:40.739537 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.22s 2026-04-04 01:07:40.739544 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 2.20s 2026-04-04 01:07:40.739552 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.07s 2026-04-04 01:07:40.739559 | orchestrator | 2026-04-04 01:07:40 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:40.739565 | orchestrator | 2026-04-04 01:07:40 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:40.739572 | orchestrator | 2026-04-04 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:43.781942 | orchestrator | 2026-04-04 01:07:43 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:43.782787 | orchestrator | 2026-04-04 01:07:43 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:07:43.782824 | orchestrator | 2026-04-04 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:07:46.833961 | orchestrator | 2026-04-04 01:07:46 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:07:46.835105 | orchestrator | 2026-04-04 01:07:46 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state 
STARTED 2026-04-04 01:07:46.835140 | orchestrator | 2026-04-04 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:10:03.778321 | orchestrator | 2026-04-04 01:10:03 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state STARTED 2026-04-04 01:10:03.783902 | orchestrator | 2026-04-04 01:10:03 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:10:03.783983 | orchestrator | 2026-04-04 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:10:06.836871 | orchestrator | 2026-04-04 01:10:06.836948 | orchestrator | 2026-04-04 01:10:06.836956 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:10:06.836961 | orchestrator | 2026-04-04 01:10:06.836966 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-04 01:10:06.836971 | orchestrator | Saturday 04 April 2026 01:00:19 +0000 (0:00:00.699) 0:00:00.699 ******** 2026-04-04 
01:10:06.836975 | orchestrator | changed: [testbed-manager] 2026-04-04 01:10:06.836981 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.836985 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:10:06.836989 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:10:06.836993 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.836997 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.837000 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.837004 | orchestrator | 2026-04-04 01:10:06.837008 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:10:06.837013 | orchestrator | Saturday 04 April 2026 01:00:21 +0000 (0:00:02.041) 0:00:02.741 ******** 2026-04-04 01:10:06.837016 | orchestrator | changed: [testbed-manager] 2026-04-04 01:10:06.837020 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.837024 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:10:06.837028 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:10:06.837032 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.837035 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.837039 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.837043 | orchestrator | 2026-04-04 01:10:06.837047 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:10:06.837051 | orchestrator | Saturday 04 April 2026 01:00:23 +0000 (0:00:01.854) 0:00:04.596 ******** 2026-04-04 01:10:06.837055 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-04 01:10:06.837059 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-04 01:10:06.837063 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-04 01:10:06.837067 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-04 01:10:06.837071 | orchestrator | changed: 
[testbed-node-3] => (item=enable_nova_True) 2026-04-04 01:10:06.837074 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-04 01:10:06.837078 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-04 01:10:06.837139 | orchestrator | 2026-04-04 01:10:06.837143 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-04 01:10:06.837147 | orchestrator | 2026-04-04 01:10:06.837151 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-04 01:10:06.837198 | orchestrator | Saturday 04 April 2026 01:00:24 +0000 (0:00:01.223) 0:00:05.819 ******** 2026-04-04 01:10:06.837202 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:10:06.837223 | orchestrator | 2026-04-04 01:10:06.837227 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-04 01:10:06.837231 | orchestrator | Saturday 04 April 2026 01:00:25 +0000 (0:00:00.618) 0:00:06.438 ******** 2026-04-04 01:10:06.837235 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-04 01:10:06.837240 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-04 01:10:06.837250 | orchestrator | 2026-04-04 01:10:06.837254 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-04 01:10:06.837258 | orchestrator | Saturday 04 April 2026 01:00:29 +0000 (0:00:04.585) 0:00:11.024 ******** 2026-04-04 01:10:06.837262 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 01:10:06.837266 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 01:10:06.837657 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.837676 | orchestrator | 2026-04-04 01:10:06.837683 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-04 01:10:06.837691 | 
orchestrator | Saturday 04 April 2026 01:00:34 +0000 (0:00:04.341) 0:00:15.365 ******** 2026-04-04 01:10:06.837697 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.837703 | orchestrator | 2026-04-04 01:10:06.837718 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-04 01:10:06.837724 | orchestrator | Saturday 04 April 2026 01:00:34 +0000 (0:00:00.730) 0:00:16.096 ******** 2026-04-04 01:10:06.837728 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.837733 | orchestrator | 2026-04-04 01:10:06.837737 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-04 01:10:06.837743 | orchestrator | Saturday 04 April 2026 01:00:36 +0000 (0:00:01.641) 0:00:17.737 ******** 2026-04-04 01:10:06.837749 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.837756 | orchestrator | 2026-04-04 01:10:06.837762 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-04 01:10:06.837770 | orchestrator | Saturday 04 April 2026 01:00:38 +0000 (0:00:02.431) 0:00:20.168 ******** 2026-04-04 01:10:06.837776 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.837782 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.837788 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.837794 | orchestrator | 2026-04-04 01:10:06.837800 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-04 01:10:06.837806 | orchestrator | Saturday 04 April 2026 01:00:39 +0000 (0:00:00.781) 0:00:20.950 ******** 2026-04-04 01:10:06.837812 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:10:06.837819 | orchestrator | 2026-04-04 01:10:06.837825 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-04 01:10:06.837832 | orchestrator | Saturday 04 April 2026 01:01:13 +0000 (0:00:33.660) 0:00:54.610 
******** 2026-04-04 01:10:06.837838 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.837844 | orchestrator | 2026-04-04 01:10:06.837850 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-04 01:10:06.837857 | orchestrator | Saturday 04 April 2026 01:01:29 +0000 (0:00:16.559) 0:01:11.169 ******** 2026-04-04 01:10:06.837863 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:10:06.837869 | orchestrator | 2026-04-04 01:10:06.837874 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-04 01:10:06.837881 | orchestrator | Saturday 04 April 2026 01:01:43 +0000 (0:00:13.949) 0:01:25.119 ******** 2026-04-04 01:10:06.837901 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:10:06.837906 | orchestrator | 2026-04-04 01:10:06.837910 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-04 01:10:06.837914 | orchestrator | Saturday 04 April 2026 01:01:44 +0000 (0:00:00.672) 0:01:25.791 ******** 2026-04-04 01:10:06.837918 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.837922 | orchestrator | 2026-04-04 01:10:06.837926 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-04 01:10:06.837930 | orchestrator | Saturday 04 April 2026 01:01:45 +0000 (0:00:00.606) 0:01:26.397 ******** 2026-04-04 01:10:06.837944 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:10:06.837948 | orchestrator | 2026-04-04 01:10:06.837952 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-04 01:10:06.837956 | orchestrator | Saturday 04 April 2026 01:01:45 +0000 (0:00:00.529) 0:01:26.927 ******** 2026-04-04 01:10:06.837960 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:10:06.838239 | orchestrator | 2026-04-04 01:10:06.838251 
| orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-04 01:10:06.838255 | orchestrator | Saturday 04 April 2026 01:02:04 +0000 (0:00:19.039) 0:01:45.966 ******** 2026-04-04 01:10:06.838259 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.838263 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.838267 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.838271 | orchestrator | 2026-04-04 01:10:06.838275 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-04 01:10:06.838279 | orchestrator | 2026-04-04 01:10:06.838282 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-04 01:10:06.838286 | orchestrator | Saturday 04 April 2026 01:02:04 +0000 (0:00:00.207) 0:01:46.173 ******** 2026-04-04 01:10:06.838290 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:10:06.838294 | orchestrator | 2026-04-04 01:10:06.838298 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-04 01:10:06.838301 | orchestrator | Saturday 04 April 2026 01:02:05 +0000 (0:00:00.638) 0:01:46.811 ******** 2026-04-04 01:10:06.838305 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.838309 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.838313 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.838316 | orchestrator | 2026-04-04 01:10:06.838320 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-04 01:10:06.838324 | orchestrator | Saturday 04 April 2026 01:02:07 +0000 (0:00:02.023) 0:01:48.834 ******** 2026-04-04 01:10:06.838328 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.838332 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.838336 | orchestrator | changed: [testbed-node-0] 
2026-04-04 01:10:06.838339 | orchestrator |
2026-04-04 01:10:06.838343 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-04 01:10:06.838347 | orchestrator | Saturday 04 April 2026 01:02:09 +0000 (0:00:02.008) 0:01:50.843 ********
2026-04-04 01:10:06.838351 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.838354 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838358 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838362 | orchestrator |
2026-04-04 01:10:06.838366 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-04 01:10:06.838370 | orchestrator | Saturday 04 April 2026 01:02:10 +0000 (0:00:00.455) 0:01:51.298 ********
2026-04-04 01:10:06.838374 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-04 01:10:06.838378 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838382 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-04 01:10:06.838385 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838389 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-04 01:10:06.838393 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-04 01:10:06.838397 | orchestrator |
2026-04-04 01:10:06.838401 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-04 01:10:06.838411 | orchestrator | Saturday 04 April 2026 01:02:19 +0000 (0:00:09.691) 0:02:00.990 ********
2026-04-04 01:10:06.838414 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.838418 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838422 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838426 | orchestrator |
2026-04-04 01:10:06.838429 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-04 01:10:06.838488 | orchestrator | Saturday 04 April 2026 01:02:19 +0000 (0:00:00.258) 0:02:01.249 ********
2026-04-04 01:10:06.838493 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-04 01:10:06.838497 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.838501 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-04 01:10:06.838505 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838509 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-04 01:10:06.838513 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838517 | orchestrator |
2026-04-04 01:10:06.838521 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-04 01:10:06.838525 | orchestrator | Saturday 04 April 2026 01:02:20 +0000 (0:00:00.792) 0:02:02.041 ********
2026-04-04 01:10:06.838529 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838533 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838537 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:10:06.838541 | orchestrator |
2026-04-04 01:10:06.838545 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-04 01:10:06.838549 | orchestrator | Saturday 04 April 2026 01:02:21 +0000 (0:00:00.554) 0:02:02.596 ********
2026-04-04 01:10:06.838553 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838556 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838560 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:10:06.838564 | orchestrator |
2026-04-04 01:10:06.838568 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-04 01:10:06.838572 | orchestrator | Saturday 04 April 2026 01:02:22 +0000 (0:00:01.021) 0:02:03.617 ********
2026-04-04 01:10:06.838576 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838580 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838605 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:10:06.838610 | orchestrator |
2026-04-04 01:10:06.838614 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-04 01:10:06.838617 | orchestrator | Saturday 04 April 2026 01:02:24 +0000 (0:00:01.910) 0:02:05.528 ********
2026-04-04 01:10:06.838621 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838625 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838629 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:10:06.838633 | orchestrator |
2026-04-04 01:10:06.838636 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-04 01:10:06.838640 | orchestrator | Saturday 04 April 2026 01:02:45 +0000 (0:00:20.932) 0:02:26.460 ********
2026-04-04 01:10:06.838644 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838648 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838651 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:10:06.838655 | orchestrator |
2026-04-04 01:10:06.838659 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-04 01:10:06.838663 | orchestrator | Saturday 04 April 2026 01:02:58 +0000 (0:00:13.534) 0:02:39.994 ********
2026-04-04 01:10:06.838667 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:10:06.838670 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838674 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838678 | orchestrator |
2026-04-04 01:10:06.838682 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-04 01:10:06.838685 | orchestrator | Saturday 04 April 2026 01:02:59 +0000 (0:00:01.091) 0:02:41.086 ********
2026-04-04 01:10:06.838689 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838693 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838697 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:10:06.838700 | orchestrator |
2026-04-04 01:10:06.838704 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-04 01:10:06.838708 | orchestrator | Saturday 04 April 2026 01:03:11 +0000 (0:00:12.168) 0:02:53.254 ********
2026-04-04 01:10:06.838712 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.838716 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838723 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838726 | orchestrator |
2026-04-04 01:10:06.838730 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-04 01:10:06.838734 | orchestrator | Saturday 04 April 2026 01:03:14 +0000 (0:00:02.385) 0:02:55.640 ********
2026-04-04 01:10:06.838738 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.838742 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.838745 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.838749 | orchestrator |
2026-04-04 01:10:06.838753 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-04 01:10:06.838757 | orchestrator |
2026-04-04 01:10:06.838762 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-04 01:10:06.838769 | orchestrator | Saturday 04 April 2026 01:03:15 +0000 (0:00:00.665) 0:02:56.305 ********
2026-04-04 01:10:06.838775 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:10:06.838782 | orchestrator |
2026-04-04 01:10:06.838788 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-04-04 01:10:06.838794 | orchestrator | Saturday 04 April 2026 01:03:15 +0000 (0:00:00.651) 0:02:56.957 ********
2026-04-04 01:10:06.838800 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-04 01:10:06.838806 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-04 01:10:06.838812 | orchestrator |
2026-04-04 01:10:06.838817 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-04-04 01:10:06.838823 | orchestrator | Saturday 04 April 2026 01:03:18 +0000 (0:00:03.069) 0:03:00.026 ********
2026-04-04 01:10:06.838828 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-04 01:10:06.838837 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-04 01:10:06.838842 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-04 01:10:06.838849 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-04 01:10:06.839171 | orchestrator |
2026-04-04 01:10:06.839181 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-04 01:10:06.839186 | orchestrator | Saturday 04 April 2026 01:03:24 +0000 (0:00:06.109) 0:03:06.136 ********
2026-04-04 01:10:06.839191 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:10:06.839196 | orchestrator |
2026-04-04 01:10:06.839201 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-04 01:10:06.839206 | orchestrator | Saturday 04 April 2026 01:03:27 +0000 (0:00:03.151) 0:03:09.288 ********
2026-04-04 01:10:06.839210 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-04 01:10:06.839215 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:10:06.839220 | orchestrator |
2026-04-04 01:10:06.839225 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-04 01:10:06.839229 | orchestrator | Saturday 04 April 2026 01:03:31 +0000 (0:00:03.719) 0:03:13.008 ********
2026-04-04 01:10:06.839233 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:10:06.839238 | orchestrator |
2026-04-04 01:10:06.839243 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-04-04 01:10:06.839247 | orchestrator | Saturday 04 April 2026 01:03:34 +0000 (0:00:03.221) 0:03:16.229 ********
2026-04-04 01:10:06.839252 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-04-04 01:10:06.839256 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-04-04 01:10:06.839261 | orchestrator |
2026-04-04 01:10:06.839266 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-04 01:10:06.839286 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:06.698) 0:03:22.927 ********
2026-04-04 01:10:06.839302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:10:06.839334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:10:06.839342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:10:06.839360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:10:06.839369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:10:06.839374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:10:06.839382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:10:06.839387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:10:06.839392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:10:06.839402 | orchestrator |
2026-04-04 01:10:06.839416 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-04 01:10:06.839420 | orchestrator | Saturday 04 April 2026 01:03:44 +0000 (0:00:02.423) 0:03:25.351 ********
2026-04-04 01:10:06.839424 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.839428 | orchestrator |
2026-04-04 01:10:06.839432 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-04 01:10:06.839513 | orchestrator | Saturday 04 April 2026 01:03:44 +0000 (0:00:00.114) 0:03:25.465 ********
2026-04-04 01:10:06.839517 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.839521 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.839525 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.839529 | orchestrator |
2026-04-04 01:10:06.839532 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-04 01:10:06.839536 | orchestrator | Saturday 04 April 2026 01:03:44 +0000 (0:00:00.278) 0:03:25.744 ********
2026-04-04 01:10:06.839540 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 01:10:06.839544 | orchestrator |
2026-04-04 01:10:06.839548 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-04-04 01:10:06.839552 | orchestrator | Saturday 04 April 2026 01:03:45 +0000 (0:00:00.671) 0:03:26.416 ********
2026-04-04 01:10:06.839555 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.839559 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.839563 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.839567 | orchestrator |
2026-04-04 01:10:06.839571 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-04 01:10:06.839574 | orchestrator | Saturday 04 April 2026 01:03:45 +0000 (0:00:00.271) 0:03:26.687 ********
2026-04-04 01:10:06.839579 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:10:06.839583 | orchestrator |
2026-04-04 01:10:06.839587 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-04 01:10:06.839591 | orchestrator | Saturday 04 April 2026 01:03:45 +0000 (0:00:00.587) 0:03:27.274 ********
2026-04-04 01:10:06.839595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-04 01:10:06.839603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774',
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.839628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.839633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.839638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.839648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.839666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.839670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.839674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.839678 | orchestrator | 2026-04-04 01:10:06.839682 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-04 01:10:06.839686 | orchestrator | Saturday 04 April 2026 01:03:49 +0000 (0:00:03.944) 0:03:31.219 ******** 2026-04-04 01:10:06.839690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.839697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.839705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.839709 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.839723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.839728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.839732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.839741 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.839746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.839762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.839767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.839771 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.839775 | orchestrator | 2026-04-04 01:10:06.839779 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-04 01:10:06.839971 | orchestrator | Saturday 04 April 2026 01:03:50 +0000 (0:00:00.813) 0:03:32.032 ******** 2026-04-04 01:10:06.839976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.839992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.840044 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.840049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.840068 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.840072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-04-04 01:10:06.840099 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.840103 | orchestrator | 2026-04-04 01:10:06.840107 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-04 01:10:06.840111 | orchestrator | Saturday 04 April 2026 01:03:52 +0000 (0:00:01.329) 0:03:33.362 ******** 2026-04-04 01:10:06.840117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840197 | orchestrator | 2026-04-04 01:10:06.840204 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-04 01:10:06.840208 | orchestrator | Saturday 04 April 2026 01:03:55 +0000 (0:00:03.447) 0:03:36.809 ******** 2026-04-04 01:10:06.840217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840293 | orchestrator | 2026-04-04 01:10:06.840297 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-04 01:10:06.840301 | orchestrator | Saturday 04 April 2026 01:04:05 +0000 (0:00:09.718) 0:03:46.528 ******** 2026-04-04 01:10:06.840308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06 | INFO  | Task 34fefefe-9dc0-4d6c-b7a8-c6220c2571cc is in state SUCCESS 2026-04-04 01:10:06.840465 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.840470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 
'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.840491 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.840509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.840526 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.840530 | orchestrator | 2026-04-04 01:10:06.840534 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-04 01:10:06.840538 | orchestrator | Saturday 04 April 2026 01:04:06 +0000 (0:00:01.552) 0:03:48.080 ******** 2026-04-04 01:10:06.840542 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.840546 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.840550 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.840553 | orchestrator | 2026-04-04 01:10:06.840557 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-04 01:10:06.840561 | orchestrator | Saturday 04 April 2026 01:04:08 +0000 (0:00:01.254) 0:03:49.335 ******** 2026-04-04 01:10:06.840565 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.840569 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.840573 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.840576 | orchestrator | 2026-04-04 01:10:06.840580 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-04 01:10:06.840584 | orchestrator | Saturday 04 April 2026 01:04:08 +0000 (0:00:00.618) 0:03:49.953 ******** 2026-04-04 01:10:06.840588 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-04 01:10:06.840592 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-04 
01:10:06.840598 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-04 01:10:06.840602 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-04 01:10:06.840606 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.840610 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.840614 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-04 01:10:06.840617 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-04 01:10:06.840621 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.840625 | orchestrator | 2026-04-04 01:10:06.840629 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-04 01:10:06.840632 | orchestrator | Saturday 04 April 2026 01:04:08 +0000 (0:00:00.322) 0:03:50.276 ******** 2026-04-04 01:10:06.840636 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-04 01:10:06.840642 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-04 01:10:06.840646 | orchestrator | 2026-04-04 01:10:06.840649 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-04 01:10:06.840653 | orchestrator | Saturday 04 April 2026 01:04:11 +0000 (0:00:02.192) 0:03:52.468 ******** 2026-04-04 01:10:06.840657 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:10:06.840661 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.840664 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:10:06.840668 | orchestrator | 2026-04-04 01:10:06.840672 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-04 01:10:06.840679 | orchestrator | Saturday 04 April 2026 01:04:13 +0000 
(0:00:02.008) 0:03:54.476 ******** 2026-04-04 01:10:06.840683 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:10:06.840687 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.840691 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:10:06.840694 | orchestrator | 2026-04-04 01:10:06.840712 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-04 01:10:06.840716 | orchestrator | Saturday 04 April 2026 01:04:15 +0000 (0:00:02.352) 0:03:56.829 ******** 2026-04-04 01:10:06.840721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 
'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840762 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-04 01:10:06.840769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.840796 | orchestrator | 2026-04-04 01:10:06.840812 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-04 01:10:06.840816 | orchestrator | Saturday 04 April 2026 01:04:17 +0000 (0:00:02.350) 0:03:59.180 ******** 2026-04-04 01:10:06.840820 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:10:06.840824 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.840828 | orchestrator | } 2026-04-04 01:10:06.840832 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:10:06.840836 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.840840 | orchestrator | } 2026-04-04 01:10:06.840844 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:10:06.840848 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.840851 | orchestrator | } 2026-04-04 01:10:06.840855 | orchestrator | 2026-04-04 01:10:06.840859 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:10:06.840863 | 
orchestrator | Saturday 04 April 2026 01:04:18 +0000 (0:00:00.340) 0:03:59.521 ******** 2026-04-04 01:10:06.840867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.840887 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.840904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.840919 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.840926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-04 01:10:06.840934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-04 01:10:06.840949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:10:06.840954 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.840959 | orchestrator |
2026-04-04 01:10:06.840963 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-04 01:10:06.840967 | orchestrator | Saturday 04 April 2026 01:04:19 +0000 (0:00:00.933) 0:04:00.454 ********
2026-04-04 01:10:06.840971 | orchestrator |
2026-04-04 01:10:06.840975 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-04 01:10:06.840979 | orchestrator | Saturday 04 April 2026 01:04:19 +0000 (0:00:00.122) 0:04:00.577 ********
2026-04-04 01:10:06.840983 | orchestrator |
2026-04-04 01:10:06.840987 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-04 01:10:06.840992 | orchestrator | Saturday 04 April 2026 01:04:19 +0000 (0:00:00.114) 0:04:00.692 ********
2026-04-04 01:10:06.840996 | orchestrator |
2026-04-04 01:10:06.841000 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-04-04 01:10:06.841004 | orchestrator | Saturday 04 April 2026 01:04:19 +0000 (0:00:00.117) 0:04:00.810 ********
2026-04-04 01:10:06.841008 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:10:06.841012 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:10:06.841016 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:10:06.841020 | orchestrator |
2026-04-04 01:10:06.841024 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-04-04 01:10:06.841029 | orchestrator | Saturday 04 April 2026 01:04:35 +0000 (0:00:16.213) 0:04:17.023 ********
2026-04-04 01:10:06.841033 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:10:06.841037 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:10:06.841041 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:10:06.841045 | orchestrator |
2026-04-04 01:10:06.841049 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] ***********************
2026-04-04 01:10:06.841053 | orchestrator | Saturday 04 April 2026 01:04:46 +0000 (0:00:10.930) 0:04:27.953 ********
2026-04-04 01:10:06.841057 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:10:06.841064 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:10:06.841069 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:10:06.841074 | orchestrator |
2026-04-04 01:10:06.841079 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-04 01:10:06.841083 | orchestrator |
2026-04-04 01:10:06.841088 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-04 01:10:06.841093 | orchestrator | Saturday 04 April 2026 01:04:59 +0000 (0:00:12.755) 0:04:40.709 ********
2026-04-04 01:10:06.841098 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:10:06.841102 | orchestrator |
2026-04-04 01:10:06.841108 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-04 01:10:06.841112 | orchestrator | Saturday 04 April 2026 01:05:00 +0000 (0:00:01.295) 0:04:42.004 ******** 2026-04-04
01:10:06.841117 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:10:06.841122 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:10:06.841126 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:10:06.841131 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.841136 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.841145 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.841150 | orchestrator |
2026-04-04 01:10:06.841155 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-04-04 01:10:06.841160 | orchestrator | Saturday 04 April 2026 01:05:01 +0000 (0:00:00.641) 0:04:42.646 ********
2026-04-04 01:10:06.841165 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:10:06.841169 | orchestrator |
2026-04-04 01:10:06.841174 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-04-04 01:10:06.841179 | orchestrator | Saturday 04 April 2026 01:05:21 +0000 (0:00:20.306) 0:05:02.952 ********
2026-04-04 01:10:06.841183 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:10:06.841188 | orchestrator |
2026-04-04 01:10:06.841193 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-04-04 01:10:06.841198 | orchestrator | Saturday 04 April 2026 01:05:23 +0000 (0:00:01.355) 0:05:04.307 ********
2026-04-04 01:10:06.841202 | orchestrator | included: service-image-info for testbed-node-3
2026-04-04 01:10:06.841207 | orchestrator |
2026-04-04 01:10:06.841212 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-04-04 01:10:06.841217 | orchestrator | Saturday 04 April 2026 01:05:23 +0000 (0:00:00.676) 0:05:04.984 ********
2026-04-04 01:10:06.841221 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:10:06.841226 | orchestrator |
2026-04-04 01:10:06.841231 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-04 01:10:06.841235 | orchestrator | Saturday 04 April 2026 01:05:26 +0000 (0:00:02.943) 0:05:07.928 ********
2026-04-04 01:10:06.841240 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:10:06.841245 | orchestrator |
2026-04-04 01:10:06.841250 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-04-04 01:10:06.841254 | orchestrator | Saturday 04 April 2026 01:05:28 +0000 (0:00:01.792) 0:05:09.721 ********
2026-04-04 01:10:06.841259 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:10:06.841264 | orchestrator |
2026-04-04 01:10:06.841269 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-04 01:10:06.841274 | orchestrator | Saturday 04 April 2026 01:05:30 +0000 (0:00:01.839) 0:05:11.560 ********
2026-04-04 01:10:06.841294 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:10:06.841300 | orchestrator |
2026-04-04 01:10:06.841305 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-04-04 01:10:06.841310 | orchestrator | Saturday 04 April 2026 01:05:32 +0000 (0:00:01.894) 0:05:13.454 ********
2026-04-04 01:10:06.841314 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.841318 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.841322 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.841326 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:10:06.841333 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:10:06.841337 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:10:06.841341 | orchestrator |
2026-04-04 01:10:06.841345 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-04-04 01:10:06.841350 | orchestrator | Saturday 04 April 2026 01:05:36 +0000 (0:00:04.239) 0:05:17.694 ********
2026-04-04 01:10:06.841354 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.841358 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.841362 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:10:06.841366 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.841370 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:10:06.841374 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:10:06.841378 | orchestrator |
2026-04-04 01:10:06.841382 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-04-04 01:10:06.841386 | orchestrator | Saturday 04 April 2026 01:05:37 +0000 (0:00:01.452) 0:05:19.146 ********
2026-04-04 01:10:06.841390 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:10:06.841394 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:10:06.841398 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:10:06.841402 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.841406 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.841410 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.841414 | orchestrator |
2026-04-04 01:10:06.841418 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-04 01:10:06.841422 | orchestrator | Saturday 04 April 2026 01:05:39 +0000 (0:00:01.255) 0:05:20.402 ********
2026-04-04 01:10:06.841427 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.841431 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.841454 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.841460 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 01:10:06.841465 | orchestrator |
2026-04-04 01:10:06.841471 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-04 01:10:06.841476 | orchestrator | Saturday 04 April 2026 01:05:39 +0000 (0:00:00.682) 0:05:21.084 ********
2026-04-04 01:10:06.841482 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-04 01:10:06.841488 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-04 01:10:06.841493 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-04 01:10:06.841499 | orchestrator |
2026-04-04 01:10:06.841505 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-04 01:10:06.841511 | orchestrator | Saturday 04 April 2026 01:05:40 +0000 (0:00:00.751) 0:05:21.835 ********
2026-04-04 01:10:06.841517 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-04-04 01:10:06.841523 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-04-04 01:10:06.841529 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-04-04 01:10:06.841535 | orchestrator |
2026-04-04 01:10:06.841541 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-04 01:10:06.841546 | orchestrator | Saturday 04 April 2026 01:05:41 +0000 (0:00:01.097) 0:05:22.933 ********
2026-04-04 01:10:06.841552 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-04 01:10:06.841558 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:10:06.841564 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-04 01:10:06.841571 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:10:06.841577 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-04 01:10:06.841583 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:10:06.841589 | orchestrator |
2026-04-04 01:10:06.841598 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-04 01:10:06.841605 | orchestrator | Saturday 04 April 2026 01:05:42 +0000 (0:00:00.482) 0:05:23.416 ********
2026-04-04 01:10:06.841609 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 01:10:06.841616 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 01:10:06.841620 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 01:10:06.841624 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.841628 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 01:10:06.841632 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 01:10:06.841635 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 01:10:06.841639 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.841643 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 01:10:06.841647 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 01:10:06.841650 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:10:06.841654 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 01:10:06.841658 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 01:10:06.841662 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 01:10:06.841665 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 01:10:06.841669 | orchestrator |
2026-04-04 01:10:06.841673 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-04 01:10:06.841696 | orchestrator | Saturday 04 April 2026 01:05:43 +0000 (0:00:01.030) 0:05:24.446 ********
2026-04-04 01:10:06.841700 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:10:06.841704 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:10:06.841708 | orchestrator | skipping: [testbed-node-2] 2026-04-04
01:10:06.841711 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.841715 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.841719 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.841723 | orchestrator | 2026-04-04 01:10:06.841726 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-04 01:10:06.841730 | orchestrator | Saturday 04 April 2026 01:05:44 +0000 (0:00:01.086) 0:05:25.533 ******** 2026-04-04 01:10:06.841734 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.841738 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.841741 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.841745 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.841749 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.841752 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.841756 | orchestrator | 2026-04-04 01:10:06.841760 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-04 01:10:06.841764 | orchestrator | Saturday 04 April 2026 01:05:45 +0000 (0:00:01.699) 0:05:27.232 ******** 2026-04-04 01:10:06.841771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.841936 | orchestrator | 2026-04-04 01:10:06.841942 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-04 01:10:06.841948 | orchestrator | Saturday 04 April 2026 01:05:48 +0000 (0:00:02.171) 0:05:29.403 ******** 2026-04-04 01:10:06.841954 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:10:06.841961 | orchestrator | 2026-04-04 01:10:06.841967 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-04 01:10:06.841972 | orchestrator | Saturday 04 April 2026 01:05:49 +0000 (0:00:01.062) 0:05:30.465 ******** 2026-04-04 01:10:06.841995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842153 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.842208 | orchestrator | 2026-04-04 01:10:06.842214 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-04 01:10:06.842219 | orchestrator | Saturday 04 April 2026 01:05:53 +0000 (0:00:04.139) 0:05:34.605 ******** 2026-04-04 01:10:06.842226 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.842235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.842241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.842266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.842274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842285 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.842291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.842296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.842305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.842311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842317 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.842340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842352 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.842358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.842364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842370 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.842376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842382 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.842391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.842398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842404 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.842409 | orchestrator | 2026-04-04 01:10:06.842415 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-04 01:10:06.842421 | orchestrator | Saturday 04 April 2026 01:05:55 +0000 (0:00:02.252) 0:05:36.857 ******** 2026-04-04 01:10:06.842471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.842485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.842491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842497 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.842507 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.842514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.842546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.842553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.842559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.842566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.842576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842582 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.842589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842597 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.842625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842631 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.842637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842643 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.842650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.842656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.842663 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.842669 | orchestrator | 2026-04-04 01:10:06.842676 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-04 01:10:06.842683 | orchestrator | Saturday 04 April 2026 01:05:58 +0000 (0:00:02.542) 0:05:39.400 ******** 2026-04-04 01:10:06.842688 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.842694 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.842700 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.842709 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:10:06.842715 | orchestrator | 2026-04-04 01:10:06.842721 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-04 01:10:06.842727 | orchestrator | Saturday 04 April 2026 01:05:59 +0000 (0:00:01.065) 0:05:40.466 ******** 2026-04-04 01:10:06.842734 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 01:10:06.842740 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 01:10:06.842746 | orchestrator | ok: [testbed-node-5 
-> localhost] 2026-04-04 01:10:06.842751 | orchestrator | 2026-04-04 01:10:06.842757 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-04 01:10:06.842770 | orchestrator | Saturday 04 April 2026 01:06:00 +0000 (0:00:01.032) 0:05:41.498 ******** 2026-04-04 01:10:06.842779 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 01:10:06.842785 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 01:10:06.842791 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 01:10:06.842797 | orchestrator | 2026-04-04 01:10:06.842802 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-04 01:10:06.842808 | orchestrator | Saturday 04 April 2026 01:06:01 +0000 (0:00:01.036) 0:05:42.534 ******** 2026-04-04 01:10:06.842815 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:10:06.842821 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:10:06.842826 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:10:06.842832 | orchestrator | 2026-04-04 01:10:06.842838 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-04 01:10:06.842844 | orchestrator | Saturday 04 April 2026 01:06:01 +0000 (0:00:00.673) 0:05:43.208 ******** 2026-04-04 01:10:06.842850 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:10:06.842856 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:10:06.842862 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:10:06.842867 | orchestrator | 2026-04-04 01:10:06.842873 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-04 01:10:06.842879 | orchestrator | Saturday 04 April 2026 01:06:02 +0000 (0:00:00.594) 0:05:43.803 ******** 2026-04-04 01:10:06.842885 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-04 01:10:06.842916 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-04 01:10:06.842924 | 
orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-04 01:10:06.842930 | orchestrator | 2026-04-04 01:10:06.842937 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-04 01:10:06.842943 | orchestrator | Saturday 04 April 2026 01:06:03 +0000 (0:00:01.430) 0:05:45.234 ******** 2026-04-04 01:10:06.842949 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-04 01:10:06.842955 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-04 01:10:06.842961 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-04 01:10:06.842966 | orchestrator | 2026-04-04 01:10:06.842972 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-04 01:10:06.842978 | orchestrator | Saturday 04 April 2026 01:06:05 +0000 (0:00:01.308) 0:05:46.543 ******** 2026-04-04 01:10:06.842984 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-04 01:10:06.842990 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-04 01:10:06.842996 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-04 01:10:06.843001 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-04 01:10:06.843007 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-04 01:10:06.843013 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-04 01:10:06.843019 | orchestrator | 2026-04-04 01:10:06.843024 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-04 01:10:06.843030 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:04.384) 0:05:50.927 ******** 2026-04-04 01:10:06.843036 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.843041 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.843047 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.843053 
| orchestrator | 2026-04-04 01:10:06.843059 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-04 01:10:06.843066 | orchestrator | Saturday 04 April 2026 01:06:10 +0000 (0:00:00.494) 0:05:51.422 ******** 2026-04-04 01:10:06.843071 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.843076 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.843082 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.843088 | orchestrator | 2026-04-04 01:10:06.843093 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-04 01:10:06.843106 | orchestrator | Saturday 04 April 2026 01:06:10 +0000 (0:00:00.404) 0:05:51.826 ******** 2026-04-04 01:10:06.843112 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.843118 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.843124 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.843130 | orchestrator | 2026-04-04 01:10:06.843136 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-04 01:10:06.843142 | orchestrator | Saturday 04 April 2026 01:06:11 +0000 (0:00:01.429) 0:05:53.256 ******** 2026-04-04 01:10:06.843149 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-04 01:10:06.843156 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-04 01:10:06.843162 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-04 01:10:06.843173 | 
orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-04 01:10:06.843179 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-04 01:10:06.843185 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-04 01:10:06.843190 | orchestrator | 2026-04-04 01:10:06.843196 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-04 01:10:06.843202 | orchestrator | Saturday 04 April 2026 01:06:15 +0000 (0:00:03.355) 0:05:56.612 ******** 2026-04-04 01:10:06.843208 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 01:10:06.843213 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 01:10:06.843220 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 01:10:06.843226 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 01:10:06.843233 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.843238 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 01:10:06.843244 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.843250 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 01:10:06.843256 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.843261 | orchestrator | 2026-04-04 01:10:06.843268 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-04 01:10:06.843274 | orchestrator | Saturday 04 April 2026 01:06:18 +0000 (0:00:03.300) 0:05:59.912 ******** 
2026-04-04 01:10:06.843280 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.843287 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.843293 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.843322 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:10:06.843330 | orchestrator | 2026-04-04 01:10:06.843336 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-04 01:10:06.843341 | orchestrator | Saturday 04 April 2026 01:06:20 +0000 (0:00:02.181) 0:06:02.094 ******** 2026-04-04 01:10:06.843347 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 01:10:06.843352 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 01:10:06.843358 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 01:10:06.843364 | orchestrator | 2026-04-04 01:10:06.843369 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-04 01:10:06.843381 | orchestrator | Saturday 04 April 2026 01:06:22 +0000 (0:00:01.200) 0:06:03.294 ******** 2026-04-04 01:10:06.843387 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.843392 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.843397 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.843402 | orchestrator | 2026-04-04 01:10:06.843408 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-04 01:10:06.843414 | orchestrator | Saturday 04 April 2026 01:06:22 +0000 (0:00:00.283) 0:06:03.578 ******** 2026-04-04 01:10:06.843420 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.843426 | orchestrator | 2026-04-04 01:10:06.843431 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-04 01:10:06.843490 | orchestrator | Saturday 04 April 2026 01:06:22 +0000 
(0:00:00.132) 0:06:03.710 ******** 2026-04-04 01:10:06.843496 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.843502 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.843507 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.843513 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.843518 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.843524 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.843530 | orchestrator | 2026-04-04 01:10:06.843536 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-04 01:10:06.843542 | orchestrator | Saturday 04 April 2026 01:06:23 +0000 (0:00:00.744) 0:06:04.455 ******** 2026-04-04 01:10:06.843548 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 01:10:06.843555 | orchestrator | 2026-04-04 01:10:06.843560 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-04 01:10:06.843566 | orchestrator | Saturday 04 April 2026 01:06:23 +0000 (0:00:00.727) 0:06:05.182 ******** 2026-04-04 01:10:06.843572 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.843578 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.843584 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.843590 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.843597 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.843603 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.843609 | orchestrator | 2026-04-04 01:10:06.843615 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-04 01:10:06.843621 | orchestrator | Saturday 04 April 2026 01:06:24 +0000 (0:00:00.539) 0:06:05.722 ******** 2026-04-04 01:10:06.843636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843851 | orchestrator | 2026-04-04 01:10:06.843857 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-04 01:10:06.843863 | orchestrator | Saturday 04 April 2026 01:06:29 +0000 (0:00:04.900) 0:06:10.622 ******** 2026-04-04 01:10:06.843870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.843877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.843885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.843908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2026-04-04 01:10:06.843920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.843948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.843955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.843989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844006 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844025 | orchestrator | 2026-04-04 01:10:06.844032 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 
2026-04-04 01:10:06.844039 | orchestrator | Saturday 04 April 2026 01:06:36 +0000 (0:00:07.262) 0:06:17.884 ******** 2026-04-04 01:10:06.844043 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844052 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.844056 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.844059 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844063 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.844073 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844077 | orchestrator | 2026-04-04 01:10:06.844081 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-04 01:10:06.844084 | orchestrator | Saturday 04 April 2026 01:06:38 +0000 (0:00:01.856) 0:06:19.741 ******** 2026-04-04 01:10:06.844088 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-04 01:10:06.844092 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-04 01:10:06.844096 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-04 01:10:06.844100 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-04 01:10:06.844105 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844109 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-04 01:10:06.844112 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-04 01:10:06.844116 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-04 01:10:06.844120 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-04 01:10:06.844124 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 01:10:06.844128 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-04 01:10:06.844131 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844135 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-04 01:10:06.844139 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-04 01:10:06.844149 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-04 01:10:06.844153 | orchestrator | 2026-04-04 01:10:06.844157 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-04 01:10:06.844161 | orchestrator | Saturday 04 April 2026 01:06:42 +0000 (0:00:04.546) 0:06:24.288 ******** 2026-04-04 01:10:06.844165 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.844168 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.844172 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.844176 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844180 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844183 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844187 | orchestrator | 2026-04-04 01:10:06.844191 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-04 01:10:06.844195 | orchestrator | Saturday 04 April 2026 01:06:43 +0000 (0:00:00.532) 0:06:24.820 ******** 2026-04-04 01:10:06.844199 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-04 01:10:06.844203 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-04 01:10:06.844207 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-04 01:10:06.844211 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-04 01:10:06.844214 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-04 01:10:06.844218 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-04 01:10:06.844226 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-04 01:10:06.844230 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-04 01:10:06.844234 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-04 01:10:06.844237 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844241 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-04 01:10:06.844245 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-04 01:10:06.844249 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844253 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-04 01:10:06.844256 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844260 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:10:06.844264 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:10:06.844268 | 
orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:10:06.844272 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:10:06.844278 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:10:06.844282 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:10:06.844286 | orchestrator | 2026-04-04 01:10:06.844290 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-04 01:10:06.844293 | orchestrator | Saturday 04 April 2026 01:06:48 +0000 (0:00:04.744) 0:06:29.565 ******** 2026-04-04 01:10:06.844297 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 01:10:06.844301 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 01:10:06.844305 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 01:10:06.844308 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 01:10:06.844312 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 01:10:06.844316 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-04 01:10:06.844320 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-04 01:10:06.844324 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-04 01:10:06.844327 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2026-04-04 01:10:06.844331 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 01:10:06.844335 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 01:10:06.844341 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 01:10:06.844347 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:10:06.844353 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:10:06.844358 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-04 01:10:06.844364 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844376 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-04 01:10:06.844382 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844389 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-04 01:10:06.844395 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844401 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:10:06.844407 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:10:06.844413 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:10:06.844419 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:10:06.844425 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:10:06.844431 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:10:06.844454 | orchestrator | 
changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:10:06.844460 | orchestrator | 2026-04-04 01:10:06.844466 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-04 01:10:06.844472 | orchestrator | Saturday 04 April 2026 01:06:55 +0000 (0:00:06.819) 0:06:36.384 ******** 2026-04-04 01:10:06.844478 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.844483 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.844489 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.844495 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844500 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844506 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844512 | orchestrator | 2026-04-04 01:10:06.844518 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-04 01:10:06.844524 | orchestrator | Saturday 04 April 2026 01:06:55 +0000 (0:00:00.502) 0:06:36.887 ******** 2026-04-04 01:10:06.844530 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.844536 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.844541 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.844547 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844552 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844558 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844565 | orchestrator | 2026-04-04 01:10:06.844571 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-04 01:10:06.844578 | orchestrator | Saturday 04 April 2026 01:06:56 +0000 (0:00:00.540) 0:06:37.427 ******** 2026-04-04 01:10:06.844584 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844591 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844597 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 01:10:06.844603 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.844609 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.844615 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.844621 | orchestrator | 2026-04-04 01:10:06.844627 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-04 01:10:06.844632 | orchestrator | Saturday 04 April 2026 01:06:58 +0000 (0:00:02.101) 0:06:39.529 ******** 2026-04-04 01:10:06.844639 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844647 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844656 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844660 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.844664 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.844668 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.844671 | orchestrator | 2026-04-04 01:10:06.844675 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-04 01:10:06.844679 | orchestrator | Saturday 04 April 2026 01:07:00 +0000 (0:00:01.892) 0:06:41.421 ******** 2026-04-04 01:10:06.844689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.844702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.844706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.844711 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.844715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.844719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.844725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-04-04 01:10:06.844733 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.844741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.844745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.844749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.844753 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.844757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.844765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.844773 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.844781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.844785 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.844796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.844800 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844804 | orchestrator | 2026-04-04 01:10:06.844808 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-04 01:10:06.844811 | orchestrator | Saturday 04 April 2026 01:07:01 +0000 (0:00:01.235) 0:06:42.657 ******** 2026-04-04 01:10:06.844816 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-04 01:10:06.844820 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-04 01:10:06.844823 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.844827 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-04 01:10:06.844831 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-04 01:10:06.844835 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.844838 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-04 01:10:06.844842 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-04 01:10:06.844846 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.844850 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-04 01:10:06.844857 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-04 01:10:06.844861 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.844865 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-04-04 01:10:06.844868 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-04 01:10:06.844872 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.844876 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-04 01:10:06.844880 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-04 01:10:06.844884 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.844887 | orchestrator | 2026-04-04 01:10:06.844891 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-04 01:10:06.844895 | orchestrator | Saturday 04 April 2026 01:07:02 +0000 (0:00:00.877) 0:06:43.534 ******** 2026-04-04 01:10:06.844902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844977 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844984 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:10:06.844988 | orchestrator | 2026-04-04 01:10:06.844992 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-04 01:10:06.844996 | orchestrator | Saturday 04 April 2026 01:07:05 +0000 (0:00:03.274) 0:06:46.808 ******** 2026-04-04 01:10:06.844999 | orchestrator | changed: [testbed-node-3] => { 2026-04-04 01:10:06.845003 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.845007 | orchestrator | } 2026-04-04 01:10:06.845011 | orchestrator | changed: [testbed-node-4] => { 2026-04-04 01:10:06.845015 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.845019 | orchestrator | } 2026-04-04 01:10:06.845025 | orchestrator | changed: [testbed-node-5] => { 2026-04-04 01:10:06.845029 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.845033 | orchestrator | } 2026-04-04 01:10:06.845037 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:10:06.845041 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.845044 | orchestrator | } 2026-04-04 01:10:06.845048 | orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:10:06.845052 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.845056 | orchestrator | } 2026-04-04 01:10:06.845060 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:10:06.845064 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:10:06.845067 | orchestrator | } 2026-04-04 01:10:06.845071 | orchestrator | 2026-04-04 01:10:06.845075 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:10:06.845079 | orchestrator | Saturday 04 April 2026 01:07:06 +0000 (0:00:00.668) 0:06:47.477 ******** 2026-04-04 01:10:06.845083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.845090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.845094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.845098 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.845105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.845113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.845117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.845121 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.845127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:10:06.845131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:10:06.845138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.845142 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.845146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.845153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.845157 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.845161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.845165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.845169 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.845176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:10:06.845180 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:10:06.845184 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.845188 | orchestrator | 2026-04-04 01:10:06.845192 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-04 01:10:06.845198 | orchestrator | Saturday 04 April 2026 01:07:07 +0000 (0:00:01.599) 0:06:49.076 ******** 2026-04-04 01:10:06.845202 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.845208 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.845212 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.845216 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.845220 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.845223 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.845227 | orchestrator | 2026-04-04 01:10:06.845231 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-04 01:10:06.845235 | orchestrator | Saturday 04 April 2026 01:07:08 +0000 (0:00:00.490) 0:06:49.566 ******** 2026-04-04 01:10:06.845238 | orchestrator | 2026-04-04 01:10:06.845242 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-04 01:10:06.845246 | orchestrator | Saturday 04 April 2026 01:07:08 +0000 (0:00:00.134) 0:06:49.701 ******** 2026-04-04 01:10:06.845250 | orchestrator | 2026-04-04 01:10:06.845253 | orchestrator | 
TASK [nova-cell : Flush handlers] ********************************************** 2026-04-04 01:10:06.845257 | orchestrator | Saturday 04 April 2026 01:07:08 +0000 (0:00:00.120) 0:06:49.821 ******** 2026-04-04 01:10:06.845261 | orchestrator | 2026-04-04 01:10:06.845265 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-04 01:10:06.845268 | orchestrator | Saturday 04 April 2026 01:07:08 +0000 (0:00:00.238) 0:06:50.059 ******** 2026-04-04 01:10:06.845272 | orchestrator | 2026-04-04 01:10:06.845276 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-04 01:10:06.845280 | orchestrator | Saturday 04 April 2026 01:07:08 +0000 (0:00:00.120) 0:06:50.180 ******** 2026-04-04 01:10:06.845283 | orchestrator | 2026-04-04 01:10:06.845287 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-04 01:10:06.845291 | orchestrator | Saturday 04 April 2026 01:07:09 +0000 (0:00:00.132) 0:06:50.312 ******** 2026-04-04 01:10:06.845295 | orchestrator | 2026-04-04 01:10:06.845298 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-04 01:10:06.845302 | orchestrator | Saturday 04 April 2026 01:07:09 +0000 (0:00:00.116) 0:06:50.429 ******** 2026-04-04 01:10:06.845306 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:10:06.845310 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.845313 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:10:06.845317 | orchestrator | 2026-04-04 01:10:06.845321 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-04 01:10:06.845325 | orchestrator | Saturday 04 April 2026 01:07:23 +0000 (0:00:14.371) 0:07:04.800 ******** 2026-04-04 01:10:06.845329 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.845333 | orchestrator | changed: [testbed-node-1] 2026-04-04 
01:10:06.845336 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:10:06.845340 | orchestrator | 2026-04-04 01:10:06.845344 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-04 01:10:06.845348 | orchestrator | Saturday 04 April 2026 01:07:42 +0000 (0:00:18.616) 0:07:23.417 ******** 2026-04-04 01:10:06.845351 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.845355 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.845359 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.845363 | orchestrator | 2026-04-04 01:10:06.845366 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-04 01:10:06.845370 | orchestrator | Saturday 04 April 2026 01:08:03 +0000 (0:00:21.442) 0:07:44.859 ******** 2026-04-04 01:10:06.845374 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.845378 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.845381 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.845385 | orchestrator | 2026-04-04 01:10:06.845389 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-04 01:10:06.845393 | orchestrator | Saturday 04 April 2026 01:08:34 +0000 (0:00:30.449) 0:08:15.309 ******** 2026-04-04 01:10:06.845396 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.845408 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.845411 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.845415 | orchestrator | 2026-04-04 01:10:06.845419 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-04 01:10:06.845423 | orchestrator | Saturday 04 April 2026 01:08:34 +0000 (0:00:00.723) 0:08:16.033 ******** 2026-04-04 01:10:06.845429 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.845450 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.845457 
| orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.845463 | orchestrator | 2026-04-04 01:10:06.845470 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-04 01:10:06.845475 | orchestrator | Saturday 04 April 2026 01:08:35 +0000 (0:00:00.857) 0:08:16.890 ******** 2026-04-04 01:10:06.845479 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:10:06.845483 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:10:06.845487 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:10:06.845490 | orchestrator | 2026-04-04 01:10:06.845494 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-04 01:10:06.845498 | orchestrator | Saturday 04 April 2026 01:08:57 +0000 (0:00:21.980) 0:08:38.871 ******** 2026-04-04 01:10:06.845502 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.845506 | orchestrator | 2026-04-04 01:10:06.845509 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-04 01:10:06.845513 | orchestrator | Saturday 04 April 2026 01:08:57 +0000 (0:00:00.112) 0:08:38.984 ******** 2026-04-04 01:10:06.845517 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.845521 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.845525 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.845528 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.845532 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.845536 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-04-04 01:10:06.845541 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 01:10:06.845545 | orchestrator | 2026-04-04 01:10:06.845549 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-04 01:10:06.845553 | orchestrator | Saturday 04 April 2026 01:09:18 +0000 (0:00:20.578) 0:08:59.563 ******** 2026-04-04 01:10:06.845556 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.845560 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.845566 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.845570 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.845574 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.845578 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.845582 | orchestrator | 2026-04-04 01:10:06.845585 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-04 01:10:06.845589 | orchestrator | Saturday 04 April 2026 01:09:26 +0000 (0:00:08.143) 0:09:07.706 ******** 2026-04-04 01:10:06.845593 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.845597 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.845600 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.845604 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.845608 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.845612 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-04-04 01:10:06.845615 | orchestrator | 2026-04-04 01:10:06.845619 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-04 01:10:06.845623 | orchestrator | Saturday 04 April 2026 01:09:30 +0000 (0:00:03.671) 0:09:11.377 ******** 2026-04-04 01:10:06.845627 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 01:10:06.845631 | 
orchestrator | 2026-04-04 01:10:06.845634 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-04 01:10:06.845638 | orchestrator | Saturday 04 April 2026 01:09:45 +0000 (0:00:15.001) 0:09:26.379 ******** 2026-04-04 01:10:06.845646 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 01:10:06.845650 | orchestrator | 2026-04-04 01:10:06.845653 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-04 01:10:06.845657 | orchestrator | Saturday 04 April 2026 01:09:46 +0000 (0:00:01.281) 0:09:27.660 ******** 2026-04-04 01:10:06.845661 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.845665 | orchestrator | 2026-04-04 01:10:06.845668 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-04 01:10:06.845672 | orchestrator | Saturday 04 April 2026 01:09:47 +0000 (0:00:01.278) 0:09:28.939 ******** 2026-04-04 01:10:06.845676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 01:10:06.845680 | orchestrator | 2026-04-04 01:10:06.845684 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-04 01:10:06.845687 | orchestrator | 2026-04-04 01:10:06.845691 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-04 01:10:06.845695 | orchestrator | Saturday 04 April 2026 01:10:01 +0000 (0:00:13.771) 0:09:42.710 ******** 2026-04-04 01:10:06.845698 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:10:06.845702 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:10:06.845706 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:10:06.845710 | orchestrator | 2026-04-04 01:10:06.845714 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-04 01:10:06.845717 | orchestrator | 2026-04-04 
01:10:06.845721 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-04 01:10:06.845725 | orchestrator | Saturday 04 April 2026 01:10:02 +0000 (0:00:01.182) 0:09:43.893 ******** 2026-04-04 01:10:06.845729 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.845732 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.845736 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.845740 | orchestrator | 2026-04-04 01:10:06.845743 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-04 01:10:06.845747 | orchestrator | 2026-04-04 01:10:06.845751 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-04 01:10:06.845755 | orchestrator | Saturday 04 April 2026 01:10:03 +0000 (0:00:00.509) 0:09:44.402 ******** 2026-04-04 01:10:06.845759 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-04 01:10:06.845763 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-04 01:10:06.845769 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-04 01:10:06.845776 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-04 01:10:06.845789 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-04 01:10:06.845801 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-04 01:10:06.845808 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-04 01:10:06.845813 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-04 01:10:06.845818 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-04 01:10:06.845824 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-04 01:10:06.845829 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:10:06.845835 | orchestrator | 
skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-04 01:10:06.845841 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-04 01:10:06.845846 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-04 01:10:06.845854 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-04 01:10:06.845860 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-04 01:10:06.845866 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-04 01:10:06.845872 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:10:06.845878 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-04 01:10:06.845889 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-04 01:10:06.845895 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-04 01:10:06.845901 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-04 01:10:06.845907 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-04 01:10:06.845913 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-04 01:10:06.845919 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-04 01:10:06.845924 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-04 01:10:06.845934 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:10:06.845940 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-04 01:10:06.845946 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-04 01:10:06.845952 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-04 01:10:06.845958 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-04 01:10:06.845964 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.845971 | orchestrator | 
skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-04 01:10:06.845974 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-04 01:10:06.845978 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.845982 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-04 01:10:06.845986 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-04 01:10:06.845990 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-04 01:10:06.845993 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-04 01:10:06.845997 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-04 01:10:06.846001 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-04 01:10:06.846005 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.846008 | orchestrator | 2026-04-04 01:10:06.846042 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-04 01:10:06.846047 | orchestrator | 2026-04-04 01:10:06.846051 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-04 01:10:06.846055 | orchestrator | Saturday 04 April 2026 01:10:04 +0000 (0:00:01.274) 0:09:45.677 ******** 2026-04-04 01:10:06.846058 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-04 01:10:06.846062 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-04 01:10:06.846066 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.846070 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-04 01:10:06.846074 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-04 01:10:06.846077 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.846081 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-04 01:10:06.846085 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-04 01:10:06.846089 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.846092 | orchestrator | 2026-04-04 01:10:06.846096 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-04 01:10:06.846100 | orchestrator | 2026-04-04 01:10:06.846104 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-04 01:10:06.846108 | orchestrator | Saturday 04 April 2026 01:10:05 +0000 (0:00:00.730) 0:09:46.407 ******** 2026-04-04 01:10:06.846111 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.846115 | orchestrator | 2026-04-04 01:10:06.846119 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-04 01:10:06.846123 | orchestrator | 2026-04-04 01:10:06.846127 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-04 01:10:06.846130 | orchestrator | Saturday 04 April 2026 01:10:05 +0000 (0:00:00.783) 0:09:47.190 ******** 2026-04-04 01:10:06.846138 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:10:06.846142 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:10:06.846145 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:10:06.846149 | orchestrator | 2026-04-04 01:10:06.846153 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:10:06.846157 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:10:06.846165 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-04 01:10:06.846169 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=60  rescued=0 ignored=0 2026-04-04 01:10:06.846173 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 
failed=0 skipped=60  rescued=0 ignored=0 2026-04-04 01:10:06.846177 | orchestrator | testbed-node-3 : ok=52  changed=30  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-04 01:10:06.846181 | orchestrator | testbed-node-4 : ok=41  changed=29  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-04 01:10:06.846184 | orchestrator | testbed-node-5 : ok=41  changed=29  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-04 01:10:06.846188 | orchestrator | 2026-04-04 01:10:06.846192 | orchestrator | 2026-04-04 01:10:06.846196 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:10:06.846200 | orchestrator | Saturday 04 April 2026 01:10:06 +0000 (0:00:00.451) 0:09:47.642 ******** 2026-04-04 01:10:06.846203 | orchestrator | =============================================================================== 2026-04-04 01:10:06.846207 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.66s 2026-04-04 01:10:06.846211 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.45s 2026-04-04 01:10:06.846215 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.98s 2026-04-04 01:10:06.846219 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.44s 2026-04-04 01:10:06.846227 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.93s 2026-04-04 01:10:06.846231 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.58s 2026-04-04 01:10:06.846234 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 20.31s 2026-04-04 01:10:06.846238 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.04s 2026-04-04 01:10:06.846242 | orchestrator | nova-cell : Restart nova-novncproxy container 
-------------------------- 18.62s 2026-04-04 01:10:06.846246 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.56s 2026-04-04 01:10:06.846250 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.21s 2026-04-04 01:10:06.846254 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.00s 2026-04-04 01:10:06.846258 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 14.37s 2026-04-04 01:10:06.846261 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.95s 2026-04-04 01:10:06.846265 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.77s 2026-04-04 01:10:06.846269 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.53s 2026-04-04 01:10:06.846273 | orchestrator | nova : Restart nova-metadata container --------------------------------- 12.76s 2026-04-04 01:10:06.846277 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.17s 2026-04-04 01:10:06.846281 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.93s 2026-04-04 01:10:06.846290 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.72s 2026-04-04 01:10:06.846293 | orchestrator | 2026-04-04 01:10:06 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:10:06.846297 | orchestrator | 2026-04-04 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:10:09.877917 | orchestrator | 2026-04-04 01:10:09 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:10:09.878009 | orchestrator | 2026-04-04 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:10:12.918927 | orchestrator | 2026-04-04 01:10:12 | INFO  | Task 
0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:10:12.919001 | orchestrator | 2026-04-04 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:11:25.992815 | orchestrator | 2026-04-04 01:11:25 | INFO  | Task
0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:11:25.992881 | orchestrator | 2026-04-04 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:11:29.027822 | orchestrator | 2026-04-04 01:11:29 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:11:29.027896 | orchestrator | 2026-04-04 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:11:32.070646 | orchestrator | 2026-04-04 01:11:32 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:11:32.070699 | orchestrator | 2026-04-04 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:11:35.122274 | orchestrator | 2026-04-04 01:11:35 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:11:35.122351 | orchestrator | 2026-04-04 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:11:38.167449 | orchestrator | 2026-04-04 01:11:38 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state STARTED 2026-04-04 01:11:38.167556 | orchestrator | 2026-04-04 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:11:41.216875 | orchestrator | 2026-04-04 01:11:41.216981 | orchestrator | 2026-04-04 01:11:41.216994 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:11:41.217004 | orchestrator | 2026-04-04 01:11:41.217279 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:11:41.217294 | orchestrator | Saturday 04 April 2026 01:06:56 +0000 (0:00:00.294) 0:00:00.294 ******** 2026-04-04 01:11:41.217302 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.217311 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:11:41.217319 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:11:41.217327 | orchestrator | 2026-04-04 01:11:41.217335 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-04-04 01:11:41.217344 | orchestrator | Saturday 04 April 2026 01:06:56 +0000 (0:00:00.382) 0:00:00.676 ******** 2026-04-04 01:11:41.217352 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-04 01:11:41.217361 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-04 01:11:41.217369 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-04 01:11:41.217377 | orchestrator | 2026-04-04 01:11:41.217385 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-04 01:11:41.217393 | orchestrator | 2026-04-04 01:11:41.217401 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:11:41.217409 | orchestrator | Saturday 04 April 2026 01:06:57 +0000 (0:00:00.392) 0:00:01.068 ******** 2026-04-04 01:11:41.217417 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:11:41.217426 | orchestrator | 2026-04-04 01:11:41.217434 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] ************** 2026-04-04 01:11:41.217442 | orchestrator | Saturday 04 April 2026 01:06:57 +0000 (0:00:00.598) 0:00:01.666 ******** 2026-04-04 01:11:41.217450 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-04 01:11:41.217459 | orchestrator | 2026-04-04 01:11:41.217466 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] ************* 2026-04-04 01:11:41.217541 | orchestrator | Saturday 04 April 2026 01:07:01 +0000 (0:00:04.269) 0:00:05.935 ******** 2026-04-04 01:11:41.217553 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-04 01:11:41.217561 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 
2026-04-04 01:11:41.217569 | orchestrator | 2026-04-04 01:11:41.217577 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-04 01:11:41.217585 | orchestrator | Saturday 04 April 2026 01:07:09 +0000 (0:00:07.615) 0:00:13.551 ******** 2026-04-04 01:11:41.217593 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:11:41.217601 | orchestrator | 2026-04-04 01:11:41.217610 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-04 01:11:41.217618 | orchestrator | Saturday 04 April 2026 01:07:13 +0000 (0:00:03.783) 0:00:17.335 ******** 2026-04-04 01:11:41.217626 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-04 01:11:41.217634 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-04 01:11:41.217642 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:11:41.217650 | orchestrator | 2026-04-04 01:11:41.217664 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-04 01:11:41.217681 | orchestrator | Saturday 04 April 2026 01:07:22 +0000 (0:00:09.261) 0:00:26.597 ******** 2026-04-04 01:11:41.217727 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:11:41.217743 | orchestrator | 2026-04-04 01:11:41.217756 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************ 2026-04-04 01:11:41.217770 | orchestrator | Saturday 04 April 2026 01:07:26 +0000 (0:00:04.072) 0:00:30.669 ******** 2026-04-04 01:11:41.217782 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-04 01:11:41.217796 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-04 01:11:41.217809 | orchestrator | 2026-04-04 01:11:41.218312 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-04 
01:11:41.218324 | orchestrator | Saturday 04 April 2026 01:07:33 +0000 (0:00:06.910) 0:00:37.580 ******** 2026-04-04 01:11:41.218345 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-04 01:11:41.218354 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-04 01:11:41.218362 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-04 01:11:41.218370 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-04 01:11:41.218378 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-04 01:11:41.218386 | orchestrator | 2026-04-04 01:11:41.218394 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:11:41.218402 | orchestrator | Saturday 04 April 2026 01:07:51 +0000 (0:00:17.430) 0:00:55.011 ******** 2026-04-04 01:11:41.218410 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:11:41.218418 | orchestrator | 2026-04-04 01:11:41.218426 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-04-04 01:11:41.218434 | orchestrator | Saturday 04 April 2026 01:07:51 +0000 (0:00:00.688) 0:00:55.700 ******** 2026-04-04 01:11:41.218442 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.218450 | orchestrator | 2026-04-04 01:11:41.218459 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-04 01:11:41.218474 | orchestrator | Saturday 04 April 2026 01:07:57 +0000 (0:00:05.455) 0:01:01.156 ******** 2026-04-04 01:11:41.218512 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.218528 | orchestrator | 2026-04-04 01:11:41.218541 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-04 01:11:41.218682 | orchestrator | Saturday 04 
April 2026 01:08:01 +0000 (0:00:04.770) 0:01:05.926 ******** 2026-04-04 01:11:41.218703 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.218716 | orchestrator | 2026-04-04 01:11:41.218725 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-04 01:11:41.218733 | orchestrator | Saturday 04 April 2026 01:08:05 +0000 (0:00:03.483) 0:01:09.410 ******** 2026-04-04 01:11:41.218741 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-04 01:11:41.218749 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-04 01:11:41.218757 | orchestrator | 2026-04-04 01:11:41.218765 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-04 01:11:41.218773 | orchestrator | Saturday 04 April 2026 01:08:14 +0000 (0:00:09.419) 0:01:18.829 ******** 2026-04-04 01:11:41.218781 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-04 01:11:41.218789 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-04 01:11:41.218799 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-04 01:11:41.218808 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-04 01:11:41.218816 | orchestrator | 2026-04-04 01:11:41.218824 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-04 01:11:41.218845 | orchestrator | Saturday 04 April 2026 01:08:32 +0000 (0:00:17.607) 0:01:36.437 ******** 2026-04-04 01:11:41.218853 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.218861 | 
orchestrator | 2026-04-04 01:11:41.218869 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-04 01:11:41.218877 | orchestrator | Saturday 04 April 2026 01:08:37 +0000 (0:00:04.948) 0:01:41.385 ******** 2026-04-04 01:11:41.218885 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.218893 | orchestrator | 2026-04-04 01:11:41.218901 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-04 01:11:41.218909 | orchestrator | Saturday 04 April 2026 01:08:42 +0000 (0:00:04.682) 0:01:46.068 ******** 2026-04-04 01:11:41.218917 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.218925 | orchestrator | 2026-04-04 01:11:41.218933 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-04 01:11:41.218941 | orchestrator | Saturday 04 April 2026 01:08:42 +0000 (0:00:00.451) 0:01:46.519 ******** 2026-04-04 01:11:41.218960 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.218968 | orchestrator | 2026-04-04 01:11:41.218983 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:11:41.218992 | orchestrator | Saturday 04 April 2026 01:08:46 +0000 (0:00:03.999) 0:01:50.518 ******** 2026-04-04 01:11:41.219000 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:11:41.219008 | orchestrator | 2026-04-04 01:11:41.219016 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-04 01:11:41.219024 | orchestrator | Saturday 04 April 2026 01:08:47 +0000 (0:00:00.764) 0:01:51.283 ******** 2026-04-04 01:11:41.219032 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.219039 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.219047 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.219055 | 
orchestrator | 2026-04-04 01:11:41.219063 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-04 01:11:41.219071 | orchestrator | Saturday 04 April 2026 01:08:52 +0000 (0:00:05.522) 0:01:56.805 ******** 2026-04-04 01:11:41.219079 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.219087 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.219095 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.219102 | orchestrator | 2026-04-04 01:11:41.219110 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-04 01:11:41.219118 | orchestrator | Saturday 04 April 2026 01:08:58 +0000 (0:00:05.253) 0:02:02.059 ******** 2026-04-04 01:11:41.219126 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.219140 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.219150 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.219164 | orchestrator | 2026-04-04 01:11:41.219176 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-04 01:11:41.219212 | orchestrator | Saturday 04 April 2026 01:08:59 +0000 (0:00:01.062) 0:02:03.121 ******** 2026-04-04 01:11:41.219225 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:11:41.219236 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:11:41.219249 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.219262 | orchestrator | 2026-04-04 01:11:41.219276 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-04 01:11:41.219290 | orchestrator | Saturday 04 April 2026 01:09:01 +0000 (0:00:01.858) 0:02:04.980 ******** 2026-04-04 01:11:41.219304 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.219317 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.219326 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.219334 | orchestrator | 2026-04-04 
01:11:41.219342 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-04 01:11:41.219350 | orchestrator | Saturday 04 April 2026 01:09:02 +0000 (0:00:01.223) 0:02:06.203 ******** 2026-04-04 01:11:41.219357 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.219372 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.219380 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.219388 | orchestrator | 2026-04-04 01:11:41.219396 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-04 01:11:41.219404 | orchestrator | Saturday 04 April 2026 01:09:03 +0000 (0:00:01.244) 0:02:07.448 ******** 2026-04-04 01:11:41.219412 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.219420 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.219428 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.219436 | orchestrator | 2026-04-04 01:11:41.219473 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-04 01:11:41.219507 | orchestrator | Saturday 04 April 2026 01:09:05 +0000 (0:00:02.410) 0:02:09.858 ******** 2026-04-04 01:11:41.219516 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.219523 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.219531 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.219539 | orchestrator | 2026-04-04 01:11:41.219547 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-04 01:11:41.219555 | orchestrator | Saturday 04 April 2026 01:09:07 +0000 (0:00:01.553) 0:02:11.411 ******** 2026-04-04 01:11:41.219563 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.219571 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:11:41.219579 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:11:41.219587 | orchestrator | 2026-04-04 01:11:41.219595 | orchestrator 
| TASK [octavia : Gather facts] ************************************************** 2026-04-04 01:11:41.219603 | orchestrator | Saturday 04 April 2026 01:09:08 +0000 (0:00:00.600) 0:02:12.012 ******** 2026-04-04 01:11:41.219611 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:11:41.219619 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:11:41.219627 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.219635 | orchestrator | 2026-04-04 01:11:41.219642 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:11:41.219651 | orchestrator | Saturday 04 April 2026 01:09:10 +0000 (0:00:02.780) 0:02:14.792 ******** 2026-04-04 01:11:41.219659 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:11:41.219667 | orchestrator | 2026-04-04 01:11:41.219675 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-04 01:11:41.219683 | orchestrator | Saturday 04 April 2026 01:09:11 +0000 (0:00:00.697) 0:02:15.490 ******** 2026-04-04 01:11:41.219691 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.219698 | orchestrator | 2026-04-04 01:11:41.219706 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-04 01:11:41.219714 | orchestrator | Saturday 04 April 2026 01:09:15 +0000 (0:00:04.295) 0:02:19.787 ******** 2026-04-04 01:11:41.219722 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.219730 | orchestrator | 2026-04-04 01:11:41.219738 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-04 01:11:41.219746 | orchestrator | Saturday 04 April 2026 01:09:18 +0000 (0:00:03.131) 0:02:22.918 ******** 2026-04-04 01:11:41.219754 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-04 01:11:41.219762 | orchestrator | ok: [testbed-node-0] => 
(item=lb-health-mgr-sec-grp) 2026-04-04 01:11:41.219770 | orchestrator | 2026-04-04 01:11:41.219778 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-04 01:11:41.219786 | orchestrator | Saturday 04 April 2026 01:09:26 +0000 (0:00:07.413) 0:02:30.332 ******** 2026-04-04 01:11:41.219794 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.219802 | orchestrator | 2026-04-04 01:11:41.219810 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-04 01:11:41.219818 | orchestrator | Saturday 04 April 2026 01:09:30 +0000 (0:00:03.690) 0:02:34.022 ******** 2026-04-04 01:11:41.219826 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:11:41.219834 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:11:41.219842 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:11:41.219859 | orchestrator | 2026-04-04 01:11:41.219868 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-04 01:11:41.219876 | orchestrator | Saturday 04 April 2026 01:09:30 +0000 (0:00:00.264) 0:02:34.287 ******** 2026-04-04 01:11:41.219891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.219926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.219936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.219945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.219954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.219969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.219981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.219991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220107 | orchestrator | 2026-04-04 01:11:41.220115 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-04 01:11:41.220124 | orchestrator | Saturday 04 April 2026 01:09:33 +0000 (0:00:02.741) 0:02:37.029 ******** 2026-04-04 01:11:41.220132 | 
orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.220140 | orchestrator | 2026-04-04 01:11:41.220148 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-04 01:11:41.220156 | orchestrator | Saturday 04 April 2026 01:09:33 +0000 (0:00:00.099) 0:02:37.128 ******** 2026-04-04 01:11:41.220163 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.220171 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:11:41.220179 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:11:41.220187 | orchestrator | 2026-04-04 01:11:41.220195 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-04 01:11:41.220203 | orchestrator | Saturday 04 April 2026 01:09:33 +0000 (0:00:00.202) 0:02:37.330 ******** 2026-04-04 01:11:41.220211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.220225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.220234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.220262 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.220289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.220298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.220312 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.220345 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 01:11:41.220371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.220381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.220389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.220419 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:11:41.220427 | orchestrator | 2026-04-04 01:11:41.220435 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:11:41.220443 | orchestrator | Saturday 04 April 2026 01:09:33 +0000 (0:00:00.552) 0:02:37.883 ******** 2026-04-04 01:11:41.220451 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:11:41.220459 | orchestrator | 2026-04-04 01:11:41.220470 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-04 01:11:41.220506 | orchestrator | Saturday 04 April 2026 01:09:34 +0000 (0:00:00.582) 0:02:38.466 ******** 2026-04-04 01:11:41.220527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41 | INFO  | Task 0ce04f50-b8c8-4f3c-8f20-06826c286652 is in state SUCCESS 2026-04-04 01:11:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:11:41.220577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.220631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.220639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.220653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.220661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.220690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220729 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.220800 | orchestrator | 2026-04-04 01:11:41.220809 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-04 01:11:41.220817 | orchestrator | Saturday 04 April 2026 01:09:39 +0000 (0:00:04.985) 0:02:43.452 ******** 2026-04-04 01:11:41.220825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.220833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.220845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.220894 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.220902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-04-04 01:11:41.220911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.220919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.220939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.220952 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:11:41.220979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.220989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.220997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.221026 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:11:41.221039 | orchestrator | 2026-04-04 01:11:41.221053 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-04 01:11:41.221067 | orchestrator | Saturday 04 April 2026 01:09:40 +0000 (0:00:00.685) 0:02:44.137 ******** 2026-04-04 01:11:41.221088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.221110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-04-04 01:11:41.221124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  
2026-04-04 01:11:41.221161 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.221179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.221205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.221231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.221275 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:11:41.221284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.221296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.221311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.221335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.221343 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:11:41.221351 | orchestrator | 2026-04-04 01:11:41.221360 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-04 01:11:41.221368 | orchestrator | Saturday 04 April 2026 01:09:41 +0000 (0:00:01.082) 0:02:45.219 ******** 2026-04-04 01:11:41.221376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.221389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.221408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.221417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.221425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.221433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.221442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221571 | orchestrator | 2026-04-04 01:11:41.221579 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-04 01:11:41.221587 | orchestrator | Saturday 04 April 2026 01:09:46 +0000 (0:00:05.390) 0:02:50.610 ******** 2026-04-04 01:11:41.221595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-04 01:11:41.221603 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-04 01:11:41.221611 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-04 01:11:41.221619 | orchestrator | 2026-04-04 01:11:41.221627 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-04 01:11:41.221635 | orchestrator | Saturday 04 April 2026 01:09:48 +0000 (0:00:01.581) 0:02:52.192 ******** 2026-04-04 01:11:41.221649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.221658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.221666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.221686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.221694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.221706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.221726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.221851 | orchestrator | 2026-04-04 01:11:41.221859 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-04 01:11:41.221867 | orchestrator | Saturday 04 April 2026 01:10:05 +0000 (0:00:16.855) 0:03:09.047 ******** 2026-04-04 01:11:41.221875 | 
orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.221889 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.221897 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.221905 | orchestrator | 2026-04-04 01:11:41.221913 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-04 01:11:41.221920 | orchestrator | Saturday 04 April 2026 01:10:07 +0000 (0:00:01.929) 0:03:10.976 ******** 2026-04-04 01:11:41.221928 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.221936 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.221944 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.221952 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.221960 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.221968 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.221976 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.221983 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.221991 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.221999 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222007 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222045 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222056 | orchestrator | 2026-04-04 01:11:41.222064 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-04 01:11:41.222072 | orchestrator | Saturday 04 April 2026 01:10:12 +0000 (0:00:05.379) 0:03:16.356 ******** 2026-04-04 01:11:41.222080 | orchestrator | 
changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.222092 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.222100 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.222108 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.222116 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.222124 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.222132 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.222140 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.222147 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.222155 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222163 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222171 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222179 | orchestrator | 2026-04-04 01:11:41.222187 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-04 01:11:41.222195 | orchestrator | Saturday 04 April 2026 01:10:17 +0000 (0:00:05.317) 0:03:21.673 ******** 2026-04-04 01:11:41.222203 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.222210 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.222218 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-04 01:11:41.222226 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.222247 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.222260 | orchestrator | 
changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-04 01:11:41.222274 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.222287 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.222298 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-04 01:11:41.222310 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222336 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222352 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-04 01:11:41.222365 | orchestrator | 2026-04-04 01:11:41.222379 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-04-04 01:11:41.222393 | orchestrator | Saturday 04 April 2026 01:10:23 +0000 (0:00:05.548) 0:03:27.221 ******** 2026-04-04 01:11:41.222403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.222412 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.222425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:11:41.222434 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.222449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.222464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:11:41.222472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:11:41.222611 | orchestrator | 2026-04-04 01:11:41.222619 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-04 01:11:41.222633 | orchestrator | Saturday 04 April 2026 01:10:27 +0000 (0:00:04.031) 0:03:31.253 ******** 2026-04-04 01:11:41.222652 | orchestrator | changed: [testbed-node-0] => { 2026-04-04 01:11:41.222667 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:11:41.222681 | orchestrator | } 2026-04-04 01:11:41.222694 | 
orchestrator | changed: [testbed-node-1] => { 2026-04-04 01:11:41.222707 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:11:41.222719 | orchestrator | } 2026-04-04 01:11:41.222732 | orchestrator | changed: [testbed-node-2] => { 2026-04-04 01:11:41.222745 | orchestrator |  "msg": "Notifying handlers" 2026-04-04 01:11:41.222759 | orchestrator | } 2026-04-04 01:11:41.222772 | orchestrator | 2026-04-04 01:11:41.222806 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-04 01:11:41.222822 | orchestrator | Saturday 04 April 2026 01:10:27 +0000 (0:00:00.562) 0:03:31.816 ******** 2026-04-04 01:11:41.222833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.222857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.222866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.222874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.222883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.222891 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.222903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.222916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.222930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.222939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.222947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.222955 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:11:41.222963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:11:41.222975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:11:41.222989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.223002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:11:41.223011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:11:41.223019 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:11:41.223027 | orchestrator | 2026-04-04 01:11:41.223035 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:11:41.223042 | orchestrator | Saturday 04 April 2026 01:10:28 +0000 (0:00:00.903) 0:03:32.719 ******** 2026-04-04 01:11:41.223050 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:11:41.223058 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:11:41.223066 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:11:41.223074 | orchestrator | 2026-04-04 01:11:41.223082 | 
orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-04 01:11:41.223090 | orchestrator | Saturday 04 April 2026 01:10:29 +0000 (0:00:00.303) 0:03:33.022 ******** 2026-04-04 01:11:41.223097 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223105 | orchestrator | 2026-04-04 01:11:41.223113 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-04 01:11:41.223121 | orchestrator | Saturday 04 April 2026 01:10:30 +0000 (0:00:01.796) 0:03:34.819 ******** 2026-04-04 01:11:41.223129 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223136 | orchestrator | 2026-04-04 01:11:41.223148 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-04 01:11:41.223166 | orchestrator | Saturday 04 April 2026 01:10:32 +0000 (0:00:01.741) 0:03:36.561 ******** 2026-04-04 01:11:41.223182 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223195 | orchestrator | 2026-04-04 01:11:41.223208 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-04 01:11:41.223220 | orchestrator | Saturday 04 April 2026 01:10:35 +0000 (0:00:02.468) 0:03:39.029 ******** 2026-04-04 01:11:41.223232 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223245 | orchestrator | 2026-04-04 01:11:41.223258 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-04 01:11:41.223271 | orchestrator | Saturday 04 April 2026 01:10:37 +0000 (0:00:02.027) 0:03:41.057 ******** 2026-04-04 01:11:41.223286 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223294 | orchestrator | 2026-04-04 01:11:41.223302 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-04 01:11:41.223310 | orchestrator | Saturday 04 April 2026 01:10:57 +0000 (0:00:20.819) 0:04:01.877 ******** 
2026-04-04 01:11:41.223318 | orchestrator | 2026-04-04 01:11:41.223326 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-04 01:11:41.223334 | orchestrator | Saturday 04 April 2026 01:10:57 +0000 (0:00:00.066) 0:04:01.943 ******** 2026-04-04 01:11:41.223342 | orchestrator | 2026-04-04 01:11:41.223350 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-04 01:11:41.223358 | orchestrator | Saturday 04 April 2026 01:10:58 +0000 (0:00:00.072) 0:04:02.016 ******** 2026-04-04 01:11:41.223366 | orchestrator | 2026-04-04 01:11:41.223374 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-04 01:11:41.223381 | orchestrator | Saturday 04 April 2026 01:10:58 +0000 (0:00:00.063) 0:04:02.079 ******** 2026-04-04 01:11:41.223390 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223397 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.223410 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.223418 | orchestrator | 2026-04-04 01:11:41.223426 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-04 01:11:41.223434 | orchestrator | Saturday 04 April 2026 01:11:07 +0000 (0:00:09.150) 0:04:11.230 ******** 2026-04-04 01:11:41.223447 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223460 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.223474 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.223618 | orchestrator | 2026-04-04 01:11:41.223633 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-04 01:11:41.223647 | orchestrator | Saturday 04 April 2026 01:11:18 +0000 (0:00:11.340) 0:04:22.571 ******** 2026-04-04 01:11:41.223661 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.223671 | orchestrator | changed: [testbed-node-2] 
2026-04-04 01:11:41.223679 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223687 | orchestrator | 2026-04-04 01:11:41.223695 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-04 01:11:41.223703 | orchestrator | Saturday 04 April 2026 01:11:27 +0000 (0:00:08.704) 0:04:31.275 ******** 2026-04-04 01:11:41.223711 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223719 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.223727 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.223735 | orchestrator | 2026-04-04 01:11:41.223743 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-04 01:11:41.223750 | orchestrator | Saturday 04 April 2026 01:11:32 +0000 (0:00:05.140) 0:04:36.415 ******** 2026-04-04 01:11:41.223758 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:11:41.223766 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:11:41.223775 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:11:41.223782 | orchestrator | 2026-04-04 01:11:41.223802 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:11:41.223812 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-04 01:11:41.223821 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 01:11:41.223851 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 01:11:41.223859 | orchestrator | 2026-04-04 01:11:41.223867 | orchestrator | 2026-04-04 01:11:41.223876 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:11:41.223884 | orchestrator | Saturday 04 April 2026 01:11:37 +0000 (0:00:05.320) 0:04:41.736 ******** 2026-04-04 01:11:41.223907 | 
orchestrator | =============================================================================== 2026-04-04 01:11:41.223915 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.82s 2026-04-04 01:11:41.223923 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.61s 2026-04-04 01:11:41.223931 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.43s 2026-04-04 01:11:41.223939 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.86s 2026-04-04 01:11:41.223947 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.34s 2026-04-04 01:11:41.223955 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.42s 2026-04-04 01:11:41.223963 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.26s 2026-04-04 01:11:41.223970 | orchestrator | octavia : Restart octavia-api container --------------------------------- 9.15s 2026-04-04 01:11:41.223978 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.70s 2026-04-04 01:11:41.223986 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 7.62s 2026-04-04 01:11:41.223994 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.41s 2026-04-04 01:11:41.224002 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 6.91s 2026-04-04 01:11:41.224009 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.55s 2026-04-04 01:11:41.224017 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.52s 2026-04-04 01:11:41.224025 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.46s 2026-04-04 01:11:41.224033 | orchestrator | 
octavia : Copying over config.json files for services ------------------- 5.39s 2026-04-04 01:11:41.224041 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.38s 2026-04-04 01:11:41.224049 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.32s 2026-04-04 01:11:41.224057 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.32s 2026-04-04 01:11:41.224065 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.25s 2026-04-04 01:11:44.260820 | orchestrator | 2026-04-04 01:11:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:11:47.297999 | orchestrator | 2026-04-04 01:11:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:11:50.347634 | orchestrator | 2026-04-04 01:11:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:11:53.395065 | orchestrator | 2026-04-04 01:11:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:11:56.444671 | orchestrator | 2026-04-04 01:11:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:11:59.492857 | orchestrator | 2026-04-04 01:11:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:02.537972 | orchestrator | 2026-04-04 01:12:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:05.585744 | orchestrator | 2026-04-04 01:12:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:08.627902 | orchestrator | 2026-04-04 01:12:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:11.670359 | orchestrator | 2026-04-04 01:12:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:14.718425 | orchestrator | 2026-04-04 01:12:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:17.763554 | orchestrator | 2026-04-04 01:12:17 | INFO  | Wait 1 
second(s) until refresh of running tasks 2026-04-04 01:12:20.807771 | orchestrator | 2026-04-04 01:12:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:23.845634 | orchestrator | 2026-04-04 01:12:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:26.888277 | orchestrator | 2026-04-04 01:12:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:29.930972 | orchestrator | 2026-04-04 01:12:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:32.975952 | orchestrator | 2026-04-04 01:12:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:36.033617 | orchestrator | 2026-04-04 01:12:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:39.072790 | orchestrator | 2026-04-04 01:12:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:12:42.117600 | orchestrator | 2026-04-04 01:12:42.311910 | orchestrator | 2026-04-04 01:12:42.318236 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Apr 4 01:12:42 UTC 2026 2026-04-04 01:12:42.318330 | orchestrator | 2026-04-04 01:12:42.644130 | orchestrator | ok: Runtime: 0:32:03.278949 2026-04-04 01:12:42.901298 | 2026-04-04 01:12:42.901475 | TASK [Bootstrap services] 2026-04-04 01:12:43.688617 | orchestrator | 2026-04-04 01:12:43.688703 | orchestrator | # BOOTSTRAP 2026-04-04 01:12:43.688713 | orchestrator | 2026-04-04 01:12:43.688719 | orchestrator | + set -e 2026-04-04 01:12:43.688724 | orchestrator | + echo 2026-04-04 01:12:43.688729 | orchestrator | + echo '# BOOTSTRAP' 2026-04-04 01:12:43.688736 | orchestrator | + echo 2026-04-04 01:12:43.688759 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-04 01:12:43.697578 | orchestrator | + set -e 2026-04-04 01:12:43.697654 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-04 01:12:48.157444 | orchestrator | 2026-04-04 01:12:48 | INFO  | It takes a 
moment until task f3773930-ad2d-4709-8527-49c5378cc9f3 (flavor-manager) has been started and output is visible here. 2026-04-04 01:12:58.627847 | orchestrator | 2026-04-04 01:12:53 | INFO  | Flavor SCS-1L-1 created 2026-04-04 01:12:58.627912 | orchestrator | 2026-04-04 01:12:53 | INFO  | Flavor SCS-1L-1-5 created 2026-04-04 01:12:58.627920 | orchestrator | 2026-04-04 01:12:54 | INFO  | Flavor SCS-1V-2 created 2026-04-04 01:12:58.627925 | orchestrator | 2026-04-04 01:12:54 | INFO  | Flavor SCS-1V-2-5 created 2026-04-04 01:12:58.627931 | orchestrator | 2026-04-04 01:12:54 | INFO  | Flavor SCS-1V-4 created 2026-04-04 01:12:58.627936 | orchestrator | 2026-04-04 01:12:54 | INFO  | Flavor SCS-1V-4-10 created 2026-04-04 01:12:58.627941 | orchestrator | 2026-04-04 01:12:54 | INFO  | Flavor SCS-1V-8 created 2026-04-04 01:12:58.627947 | orchestrator | 2026-04-04 01:12:54 | INFO  | Flavor SCS-1V-8-20 created 2026-04-04 01:12:58.627955 | orchestrator | 2026-04-04 01:12:55 | INFO  | Flavor SCS-2V-4 created 2026-04-04 01:12:58.627960 | orchestrator | 2026-04-04 01:12:55 | INFO  | Flavor SCS-2V-4-10 created 2026-04-04 01:12:58.627965 | orchestrator | 2026-04-04 01:12:55 | INFO  | Flavor SCS-2V-8 created 2026-04-04 01:12:58.627970 | orchestrator | 2026-04-04 01:12:55 | INFO  | Flavor SCS-2V-8-20 created 2026-04-04 01:12:58.627976 | orchestrator | 2026-04-04 01:12:55 | INFO  | Flavor SCS-2V-16 created 2026-04-04 01:12:58.627981 | orchestrator | 2026-04-04 01:12:55 | INFO  | Flavor SCS-2V-16-50 created 2026-04-04 01:12:58.627986 | orchestrator | 2026-04-04 01:12:56 | INFO  | Flavor SCS-4V-8 created 2026-04-04 01:12:58.627991 | orchestrator | 2026-04-04 01:12:56 | INFO  | Flavor SCS-4V-8-20 created 2026-04-04 01:12:58.627996 | orchestrator | 2026-04-04 01:12:56 | INFO  | Flavor SCS-4V-16 created 2026-04-04 01:12:58.628001 | orchestrator | 2026-04-04 01:12:56 | INFO  | Flavor SCS-4V-16-50 created 2026-04-04 01:12:58.628006 | orchestrator | 2026-04-04 01:12:56 | INFO  | Flavor 
SCS-4V-32 created 2026-04-04 01:12:58.628012 | orchestrator | 2026-04-04 01:12:56 | INFO  | Flavor SCS-4V-32-100 created 2026-04-04 01:12:58.628017 | orchestrator | 2026-04-04 01:12:57 | INFO  | Flavor SCS-8V-16 created 2026-04-04 01:12:58.628022 | orchestrator | 2026-04-04 01:12:57 | INFO  | Flavor SCS-8V-16-50 created 2026-04-04 01:12:58.628027 | orchestrator | 2026-04-04 01:12:57 | INFO  | Flavor SCS-8V-32 created 2026-04-04 01:12:58.628032 | orchestrator | 2026-04-04 01:12:57 | INFO  | Flavor SCS-8V-32-100 created 2026-04-04 01:12:58.628037 | orchestrator | 2026-04-04 01:12:57 | INFO  | Flavor SCS-16V-32 created 2026-04-04 01:12:58.628042 | orchestrator | 2026-04-04 01:12:57 | INFO  | Flavor SCS-16V-32-100 created 2026-04-04 01:12:58.628047 | orchestrator | 2026-04-04 01:12:57 | INFO  | Flavor SCS-2V-4-20s created 2026-04-04 01:12:58.628052 | orchestrator | 2026-04-04 01:12:58 | INFO  | Flavor SCS-4V-8-50s created 2026-04-04 01:12:58.628057 | orchestrator | 2026-04-04 01:12:58 | INFO  | Flavor SCS-4V-16-100s created 2026-04-04 01:12:58.628063 | orchestrator | 2026-04-04 01:12:58 | INFO  | Flavor SCS-8V-32-100s created 2026-04-04 01:13:00.144139 | orchestrator | 2026-04-04 01:13:00 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-04 01:13:10.322545 | orchestrator | 2026-04-04 01:13:10 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-04 01:13:10.397122 | orchestrator | 2026-04-04 01:13:10 | INFO  | Task 5678b6ee-b2ab-48cd-91fa-24ce7fb9ce07 (bootstrap-basic) was prepared for execution. 2026-04-04 01:13:10.397201 | orchestrator | 2026-04-04 01:13:10 | INFO  | It takes a moment until task 5678b6ee-b2ab-48cd-91fa-24ce7fb9ce07 (bootstrap-basic) has been started and output is visible here. 
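The flavor-manager run above creates flavors whose names follow the SCS naming scheme, where the resource shape is encoded in the name itself (e.g. `SCS-2V-4-10` is 2 vCPUs, 4 GiB RAM, 10 GB root disk; a trailing `s` as in `SCS-2V-4-20s` marks local SSD storage, and `L` as in `SCS-1L-1` marks a low-performance vCPU class). As a minimal illustration of that convention — this parser is not part of the testbed tooling, just a sketch assuming the published SCS naming rules:

```python
import re

# Illustrative parser for the SCS flavor names created above
# (e.g. SCS-2V-4-10 = 2 vCPUs, 4 GiB RAM, 10 GB root disk).
# Assumes the SCS flavor naming convention; hypothetical helper,
# not part of flavor-manager itself.
SCS_NAME = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_suffix>[VLTC])"  # vCPU count and class letter
    r"-(?P<ram>\d+)"                             # RAM in GiB
    r"(?:-(?P<disk>\d+)(?P<disk_suffix>s?))?$"   # optional root disk in GB, 's' = local SSD
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = SCS_NAME.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_class": m.group("cpu_suffix"),
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("disk_suffix") == "s",
    }
```

Flavors without a disk component (e.g. `SCS-2V-4`) are diskless and expect boot-from-volume, which parses here as `disk_gb: 0`.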
2026-04-04 01:13:56.189814 | orchestrator | 2026-04-04 01:13:56.189901 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-04 01:13:56.189909 | orchestrator | 2026-04-04 01:13:56.189913 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 01:13:56.189918 | orchestrator | Saturday 04 April 2026 01:13:13 +0000 (0:00:00.120) 0:00:00.120 ******** 2026-04-04 01:13:56.189923 | orchestrator | ok: [localhost] 2026-04-04 01:13:56.189927 | orchestrator | 2026-04-04 01:13:56.189931 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-04 01:13:56.189936 | orchestrator | Saturday 04 April 2026 01:13:15 +0000 (0:00:02.053) 0:00:02.173 ******** 2026-04-04 01:13:56.189941 | orchestrator | ok: [localhost] 2026-04-04 01:13:56.189945 | orchestrator | 2026-04-04 01:13:56.189949 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-04 01:13:56.189953 | orchestrator | Saturday 04 April 2026 01:13:25 +0000 (0:00:10.013) 0:00:12.187 ******** 2026-04-04 01:13:56.189957 | orchestrator | changed: [localhost] 2026-04-04 01:13:56.189962 | orchestrator | 2026-04-04 01:13:56.189966 | orchestrator | TASK [Create public network] *************************************************** 2026-04-04 01:13:56.189970 | orchestrator | Saturday 04 April 2026 01:13:33 +0000 (0:00:07.598) 0:00:19.786 ******** 2026-04-04 01:13:56.189974 | orchestrator | changed: [localhost] 2026-04-04 01:13:56.189978 | orchestrator | 2026-04-04 01:13:56.189984 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-04 01:13:56.189989 | orchestrator | Saturday 04 April 2026 01:13:38 +0000 (0:00:04.994) 0:00:24.780 ******** 2026-04-04 01:13:56.189993 | orchestrator | changed: [localhost] 2026-04-04 01:13:56.189997 | orchestrator | 2026-04-04 01:13:56.190001 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-04 01:13:56.190005 | orchestrator | Saturday 04 April 2026 01:13:44 +0000 (0:00:06.271) 0:00:31.052 ******** 2026-04-04 01:13:56.190008 | orchestrator | changed: [localhost] 2026-04-04 01:13:56.190060 | orchestrator | 2026-04-04 01:13:56.190066 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-04 01:13:56.190069 | orchestrator | Saturday 04 April 2026 01:13:48 +0000 (0:00:04.174) 0:00:35.226 ******** 2026-04-04 01:13:56.190073 | orchestrator | changed: [localhost] 2026-04-04 01:13:56.190077 | orchestrator | 2026-04-04 01:13:56.190081 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-04 01:13:56.190093 | orchestrator | Saturday 04 April 2026 01:13:52 +0000 (0:00:03.905) 0:00:39.131 ******** 2026-04-04 01:13:56.190097 | orchestrator | ok: [localhost] 2026-04-04 01:13:56.190101 | orchestrator | 2026-04-04 01:13:56.190105 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:13:56.190109 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:13:56.190114 | orchestrator | 2026-04-04 01:13:56.190118 | orchestrator | 2026-04-04 01:13:56.190122 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:13:56.190126 | orchestrator | Saturday 04 April 2026 01:13:56 +0000 (0:00:03.432) 0:00:42.564 ******** 2026-04-04 01:13:56.190130 | orchestrator | =============================================================================== 2026-04-04 01:13:56.190134 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.01s 2026-04-04 01:13:56.190158 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.60s 2026-04-04 01:13:56.190164 | 
orchestrator | Set public network to default ------------------------------------------- 6.27s 2026-04-04 01:13:56.190170 | orchestrator | Create public network --------------------------------------------------- 4.99s 2026-04-04 01:13:56.190176 | orchestrator | Create public subnet ---------------------------------------------------- 4.17s 2026-04-04 01:13:56.190183 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.91s 2026-04-04 01:13:56.190188 | orchestrator | Create manager role ----------------------------------------------------- 3.43s 2026-04-04 01:13:56.190194 | orchestrator | Gathering Facts --------------------------------------------------------- 2.05s 2026-04-04 01:13:57.867654 | orchestrator | 2026-04-04 01:13:57 | INFO  | It takes a moment until task ef3961d3-0067-4282-b2c1-6152b415cd04 (image-manager) has been started and output is visible here. 2026-04-04 01:14:36.396006 | orchestrator | 2026-04-04 01:14:00 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-04 01:14:36.396090 | orchestrator | 2026-04-04 01:14:00 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-04 01:14:36.396098 | orchestrator | 2026-04-04 01:14:00 | INFO  | Importing image Cirros 0.6.2 2026-04-04 01:14:36.396103 | orchestrator | 2026-04-04 01:14:00 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-04 01:14:36.396108 | orchestrator | 2026-04-04 01:14:02 | INFO  | Waiting for image to leave queued state... 2026-04-04 01:14:36.396113 | orchestrator | 2026-04-04 01:14:05 | INFO  | Waiting for import to complete... 
2026-04-04 01:14:36.396118 | orchestrator | 2026-04-04 01:14:15 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-04 01:14:36.396123 | orchestrator | 2026-04-04 01:14:15 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-04 01:14:36.396126 | orchestrator | 2026-04-04 01:14:15 | INFO  | Setting internal_version = 0.6.2 2026-04-04 01:14:36.396131 | orchestrator | 2026-04-04 01:14:15 | INFO  | Setting image_original_user = cirros 2026-04-04 01:14:36.396136 | orchestrator | 2026-04-04 01:14:15 | INFO  | Adding tag os:cirros 2026-04-04 01:14:36.396140 | orchestrator | 2026-04-04 01:14:15 | INFO  | Setting property architecture: x86_64 2026-04-04 01:14:36.396144 | orchestrator | 2026-04-04 01:14:15 | INFO  | Setting property hw_disk_bus: scsi 2026-04-04 01:14:36.396148 | orchestrator | 2026-04-04 01:14:16 | INFO  | Setting property hw_rng_model: virtio 2026-04-04 01:14:36.396152 | orchestrator | 2026-04-04 01:14:16 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-04 01:14:36.396156 | orchestrator | 2026-04-04 01:14:16 | INFO  | Setting property hw_watchdog_action: reset 2026-04-04 01:14:36.396160 | orchestrator | 2026-04-04 01:14:16 | INFO  | Setting property hypervisor_type: qemu 2026-04-04 01:14:36.396170 | orchestrator | 2026-04-04 01:14:16 | INFO  | Setting property os_distro: cirros 2026-04-04 01:14:36.396176 | orchestrator | 2026-04-04 01:14:16 | INFO  | Setting property os_purpose: minimal 2026-04-04 01:14:36.396182 | orchestrator | 2026-04-04 01:14:17 | INFO  | Setting property replace_frequency: never 2026-04-04 01:14:36.396188 | orchestrator | 2026-04-04 01:14:17 | INFO  | Setting property uuid_validity: none 2026-04-04 01:14:36.396194 | orchestrator | 2026-04-04 01:14:17 | INFO  | Setting property provided_until: none 2026-04-04 01:14:36.396199 | orchestrator | 2026-04-04 01:14:17 | INFO  | Setting property image_description: Cirros 2026-04-04 01:14:36.396205 | orchestrator | 2026-04-04 01:14:17 | INFO  | 
Setting property image_name: Cirros 2026-04-04 01:14:36.396231 | orchestrator | 2026-04-04 01:14:18 | INFO  | Setting property internal_version: 0.6.2 2026-04-04 01:14:36.396238 | orchestrator | 2026-04-04 01:14:18 | INFO  | Setting property image_original_user: cirros 2026-04-04 01:14:36.396243 | orchestrator | 2026-04-04 01:14:18 | INFO  | Setting property os_version: 0.6.2 2026-04-04 01:14:36.396250 | orchestrator | 2026-04-04 01:14:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-04 01:14:36.396258 | orchestrator | 2026-04-04 01:14:18 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-04 01:14:36.396264 | orchestrator | 2026-04-04 01:14:19 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-04 01:14:36.396270 | orchestrator | 2026-04-04 01:14:19 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-04 01:14:36.396279 | orchestrator | 2026-04-04 01:14:19 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-04 01:14:36.396285 | orchestrator | 2026-04-04 01:14:19 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-04 01:14:36.396291 | orchestrator | 2026-04-04 01:14:19 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-04 01:14:36.396298 | orchestrator | 2026-04-04 01:14:19 | INFO  | Importing image Cirros 0.6.3 2026-04-04 01:14:36.396312 | orchestrator | 2026-04-04 01:14:19 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-04 01:14:36.396320 | orchestrator | 2026-04-04 01:14:19 | INFO  | Waiting for image to leave queued state... 2026-04-04 01:14:36.396325 | orchestrator | 2026-04-04 01:14:21 | INFO  | Waiting for import to complete... 
2026-04-04 01:14:36.396340 | orchestrator | 2026-04-04 01:14:31 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-04 01:14:36.396345 | orchestrator | 2026-04-04 01:14:32 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-04 01:14:36.396349 | orchestrator | 2026-04-04 01:14:32 | INFO  | Setting internal_version = 0.6.3
2026-04-04 01:14:36.396352 | orchestrator | 2026-04-04 01:14:32 | INFO  | Setting image_original_user = cirros
2026-04-04 01:14:36.396356 | orchestrator | 2026-04-04 01:14:32 | INFO  | Adding tag os:cirros
2026-04-04 01:14:36.396360 | orchestrator | 2026-04-04 01:14:32 | INFO  | Setting property architecture: x86_64
2026-04-04 01:14:36.396364 | orchestrator | 2026-04-04 01:14:32 | INFO  | Setting property hw_disk_bus: scsi
2026-04-04 01:14:36.396368 | orchestrator | 2026-04-04 01:14:32 | INFO  | Setting property hw_rng_model: virtio
2026-04-04 01:14:36.396372 | orchestrator | 2026-04-04 01:14:32 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-04 01:14:36.396376 | orchestrator | 2026-04-04 01:14:32 | INFO  | Setting property hw_watchdog_action: reset
2026-04-04 01:14:36.396380 | orchestrator | 2026-04-04 01:14:33 | INFO  | Setting property hypervisor_type: qemu
2026-04-04 01:14:36.396384 | orchestrator | 2026-04-04 01:14:33 | INFO  | Setting property os_distro: cirros
2026-04-04 01:14:36.396387 | orchestrator | 2026-04-04 01:14:33 | INFO  | Setting property os_purpose: minimal
2026-04-04 01:14:36.396391 | orchestrator | 2026-04-04 01:14:33 | INFO  | Setting property replace_frequency: never
2026-04-04 01:14:36.396395 | orchestrator | 2026-04-04 01:14:33 | INFO  | Setting property uuid_validity: none
2026-04-04 01:14:36.396399 | orchestrator | 2026-04-04 01:14:34 | INFO  | Setting property provided_until: none
2026-04-04 01:14:36.396403 | orchestrator | 2026-04-04 01:14:34 | INFO  | Setting property image_description: Cirros
2026-04-04 01:14:36.396412 | orchestrator | 2026-04-04 01:14:34 | INFO  | Setting property image_name: Cirros
2026-04-04 01:14:36.396416 | orchestrator | 2026-04-04 01:14:34 | INFO  | Setting property internal_version: 0.6.3
2026-04-04 01:14:36.396420 | orchestrator | 2026-04-04 01:14:34 | INFO  | Setting property image_original_user: cirros
2026-04-04 01:14:36.396424 | orchestrator | 2026-04-04 01:14:35 | INFO  | Setting property os_version: 0.6.3
2026-04-04 01:14:36.396428 | orchestrator | 2026-04-04 01:14:35 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-04 01:14:36.396431 | orchestrator | 2026-04-04 01:14:35 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-04 01:14:36.396435 | orchestrator | 2026-04-04 01:14:35 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-04 01:14:36.396439 | orchestrator | 2026-04-04 01:14:35 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-04 01:14:36.396443 | orchestrator | 2026-04-04 01:14:35 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-04 01:14:36.644456 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-04 01:14:38.602732 | orchestrator | 2026-04-04 01:14:38 | INFO  | date: 2026-04-03
2026-04-04 01:14:38.602780 | orchestrator | 2026-04-04 01:14:38 | INFO  | image: octavia-amphora-haproxy-2025.1.20260403.qcow2
2026-04-04 01:14:38.603465 | orchestrator | 2026-04-04 01:14:38 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260403.qcow2
2026-04-04 01:14:38.603799 | orchestrator | 2026-04-04 01:14:38 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260403.qcow2.CHECKSUM
2026-04-04 01:14:38.826621 | orchestrator | 2026-04-04 01:14:38 | INFO  | checksum: c1a914cd8efdc43694e46d77d61b84dfa16abfffec6ec162387a1e3af1866588
2026-04-04 01:14:38.912626 | orchestrator | 2026-04-04 01:14:38 | INFO  | It takes a moment until task 52f995cf-8490-4456-9500-29592540bf8d (image-manager) has been started and output is visible here.
2026-04-04 01:15:39.582545 | orchestrator | 2026-04-04 01:14:40 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-03'
2026-04-04 01:15:39.582740 | orchestrator | 2026-04-04 01:14:41 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260403.qcow2: 200
2026-04-04 01:15:39.582758 | orchestrator | 2026-04-04 01:14:41 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-03
2026-04-04 01:15:39.582766 | orchestrator | 2026-04-04 01:14:41 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260403.qcow2
2026-04-04 01:15:39.582775 | orchestrator | 2026-04-04 01:14:42 | INFO  | Waiting for image to leave queued state...
2026-04-04 01:15:39.582783 | orchestrator | 2026-04-04 01:14:44 | INFO  | Waiting for import to complete...
2026-04-04 01:15:39.582789 | orchestrator | 2026-04-04 01:14:54 | INFO  | Waiting for import to complete...
2026-04-04 01:15:39.582796 | orchestrator | 2026-04-04 01:15:04 | INFO  | Waiting for import to complete...
2026-04-04 01:15:39.582803 | orchestrator | 2026-04-04 01:15:15 | INFO  | Waiting for import to complete...
2026-04-04 01:15:39.582812 | orchestrator | 2026-04-04 01:15:25 | INFO  | Waiting for import to complete...
2026-04-04 01:15:39.582819 | orchestrator | 2026-04-04 01:15:35 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-03' successfully completed, reloading images
2026-04-04 01:15:39.582853 | orchestrator | 2026-04-04 01:15:35 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-03'
2026-04-04 01:15:39.582860 | orchestrator | 2026-04-04 01:15:35 | INFO  | Setting internal_version = 2026-04-03
2026-04-04 01:15:39.582866 | orchestrator | 2026-04-04 01:15:35 | INFO  | Setting image_original_user = ubuntu
2026-04-04 01:15:39.582874 | orchestrator | 2026-04-04 01:15:35 | INFO  | Adding tag amphora
2026-04-04 01:15:39.582881 | orchestrator | 2026-04-04 01:15:35 | INFO  | Adding tag os:ubuntu
2026-04-04 01:15:39.582888 | orchestrator | 2026-04-04 01:15:35 | INFO  | Setting property architecture: x86_64
2026-04-04 01:15:39.582895 | orchestrator | 2026-04-04 01:15:35 | INFO  | Setting property hw_disk_bus: scsi
2026-04-04 01:15:39.582901 | orchestrator | 2026-04-04 01:15:36 | INFO  | Setting property hw_rng_model: virtio
2026-04-04 01:15:39.582909 | orchestrator | 2026-04-04 01:15:36 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-04 01:15:39.582914 | orchestrator | 2026-04-04 01:15:36 | INFO  | Setting property hw_watchdog_action: reset
2026-04-04 01:15:39.582921 | orchestrator | 2026-04-04 01:15:36 | INFO  | Setting property hypervisor_type: qemu
2026-04-04 01:15:39.582927 | orchestrator | 2026-04-04 01:15:36 | INFO  | Setting property os_distro: ubuntu
2026-04-04 01:15:39.582933 | orchestrator | 2026-04-04 01:15:37 | INFO  | Setting property replace_frequency: quarterly
2026-04-04 01:15:39.582937 | orchestrator | 2026-04-04 01:15:37 | INFO  | Setting property uuid_validity: last-1
2026-04-04 01:15:39.582941 | orchestrator | 2026-04-04 01:15:37 | INFO  | Setting property provided_until: none
2026-04-04 01:15:39.582945 | orchestrator | 2026-04-04 01:15:37 | INFO  | Setting property os_purpose: network
2026-04-04 01:15:39.582950 | orchestrator | 2026-04-04 01:15:37 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-04 01:15:39.582966 | orchestrator | 2026-04-04 01:15:38 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-04 01:15:39.582970 | orchestrator | 2026-04-04 01:15:38 | INFO  | Setting property internal_version: 2026-04-03
2026-04-04 01:15:39.582974 | orchestrator | 2026-04-04 01:15:38 | INFO  | Setting property image_original_user: ubuntu
2026-04-04 01:15:39.582978 | orchestrator | 2026-04-04 01:15:38 | INFO  | Setting property os_version: 2026-04-03
2026-04-04 01:15:39.582982 | orchestrator | 2026-04-04 01:15:38 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260403.qcow2
2026-04-04 01:15:39.582986 | orchestrator | 2026-04-04 01:15:39 | INFO  | Setting property image_build_date: 2026-04-03
2026-04-04 01:15:39.582990 | orchestrator | 2026-04-04 01:15:39 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-03'
2026-04-04 01:15:39.582994 | orchestrator | 2026-04-04 01:15:39 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-03'
2026-04-04 01:15:39.582998 | orchestrator | 2026-04-04 01:15:39 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-04 01:15:39.583016 | orchestrator | 2026-04-04 01:15:39 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-04 01:15:39.583022 | orchestrator | 2026-04-04 01:15:39 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-04 01:15:39.583027 | orchestrator | 2026-04-04 01:15:39 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-04 01:15:40.099133 | orchestrator | ok: Runtime: 0:02:56.550795
2026-04-04 01:15:40.122741 |
2026-04-04 01:15:40.122922 | TASK [Run checks]
2026-04-04 01:15:41.007848 | orchestrator | + set -e
2026-04-04 01:15:41.008031 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-04 01:15:41.008044 | orchestrator | ++ export INTERACTIVE=false
2026-04-04 01:15:41.008058 | orchestrator | ++ INTERACTIVE=false
2026-04-04 01:15:41.008069 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-04 01:15:41.008077 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-04 01:15:41.008086 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-04 01:15:41.008980 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-04 01:15:41.015910 | orchestrator |
2026-04-04 01:15:41.016053 | orchestrator | # CHECK
2026-04-04 01:15:41.016067 | orchestrator |
2026-04-04 01:15:41.016075 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-04 01:15:41.016089 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-04 01:15:41.016096 | orchestrator | + echo
2026-04-04 01:15:41.016102 | orchestrator | + echo '# CHECK'
2026-04-04 01:15:41.016109 | orchestrator | + echo
2026-04-04 01:15:41.016124 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-04 01:15:41.016604 | orchestrator | ++ semver latest 5.0.0
2026-04-04 01:15:41.066651 | orchestrator |
2026-04-04 01:15:41.066781 | orchestrator | ## Containers @ testbed-manager
2026-04-04 01:15:41.066797 | orchestrator |
2026-04-04 01:15:41.066808 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-04 01:15:41.066815 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-04 01:15:41.066822 | orchestrator | + echo
2026-04-04 01:15:41.066830 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-04 01:15:41.066839 | orchestrator | + echo
2026-04-04 01:15:41.066845 | orchestrator | + osism container testbed-manager ps
2026-04-04 01:15:42.094262 | orchestrator | 2026-04-04 01:15:42 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-04 01:15:42.480783 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-04 01:15:42.480899 | orchestrator | 101ffbf7f45a registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2026-04-04 01:15:42.480924 | orchestrator | c589e6a92b0a registry.osism.tech/kolla/prometheus-alertmanager:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2026-04-04 01:15:42.480936 | orchestrator | 7ba470a6ad64 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-04 01:15:42.480944 | orchestrator | f343b0e752f1 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-04-04 01:15:42.480956 | orchestrator | 024cbbbb036e registry.osism.tech/kolla/prometheus-server:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2026-04-04 01:15:42.480963 | orchestrator | 4cb9f7995d1e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient
2026-04-04 01:15:42.480971 | orchestrator | 3697b8800eb1 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-04 01:15:42.480978 | orchestrator | e1396596d017 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox
2026-04-04 01:15:42.481006 | orchestrator | 2ef6247904b0 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-04 01:15:42.481014 | orchestrator | 221641f52b01 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin
2026-04-04 01:15:42.481021 | orchestrator | caf2581dbeef registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 29 minutes ago Up 29 minutes openstackclient
2026-04-04 01:15:42.481028 | orchestrator | f43e957ec3be registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer
2026-04-04 01:15:42.481035 | orchestrator | ac46fcf6f798 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-04 01:15:42.481042 | orchestrator | 2118476dbe63 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1
2026-04-04 01:15:42.481049 | orchestrator | 458e8998251e registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) ceph-ansible
2026-04-04 01:15:42.481072 | orchestrator | 43ebefb380b9 registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) kolla-ansible
2026-04-04 01:15:42.481084 | orchestrator | dd968af9a973 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-ansible
2026-04-04 01:15:42.481092 | orchestrator | bf6e7a0af715 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-kubernetes
2026-04-04 01:15:42.481099 | orchestrator | 8b8901fb48da registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-04 01:15:42.481105 | orchestrator | 2486c04e8080 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1
2026-04-04 01:15:42.481112 | orchestrator | 3b4b0aca46fd registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient
2026-04-04 01:15:42.481119 | orchestrator | bd663c78d06c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1
2026-04-04 01:15:42.481126 | orchestrator | cc87728ffda2 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-04 01:15:42.481139 | orchestrator | b0be954e95a5 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-04 01:15:42.481147 | orchestrator | e8e2867ac694 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1
2026-04-04 01:15:42.481153 | orchestrator | 7fa0dbfd2c6c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1
2026-04-04 01:15:42.481160 | orchestrator | bcc51ee76bff registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-beat-1
2026-04-04 01:15:42.481167 | orchestrator | 7abd8bf4da92 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-04 01:15:42.481174 | orchestrator | 960dcce88e7c registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-04 01:15:42.618375 | orchestrator |
2026-04-04 01:15:42.618474 | orchestrator | ## Images @ testbed-manager
2026-04-04 01:15:42.618486 | orchestrator |
2026-04-04 01:15:42.618493 | orchestrator | + echo
2026-04-04 01:15:42.618501 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-04 01:15:42.618509 | orchestrator | + echo
2026-04-04 01:15:42.618522 | orchestrator | + osism container testbed-manager images
2026-04-04 01:15:44.018109 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-04 01:15:44.018162 | orchestrator | registry.osism.tech/osism/kolla-ansible 2025.1 0457a19ad7da About an hour ago 635MB
2026-04-04 01:15:44.018167 | orchestrator | registry.osism.tech/osism/osism-ansible latest 0df404b6426d About an hour ago 638MB
2026-04-04 01:15:44.018172 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 4bedf21e21b2 About an hour ago 585MB
2026-04-04 01:15:44.018176 | orchestrator | registry.osism.tech/osism/osism latest 2093f32a3ff3 About an hour ago 407MB
2026-04-04 01:15:44.018191 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 1e1660ae2c68 About an hour ago 1.24GB
2026-04-04 01:15:44.018195 | orchestrator | registry.osism.tech/osism/osism-frontend latest 0be85bab30fc About an hour ago 212MB
2026-04-04 01:15:44.018199 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 86eb43df08a0 About an hour ago 357MB
2026-04-04 01:15:44.018203 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 cb6f0f54a79e 21 hours ago 213MB
2026-04-04 01:15:44.018207 | orchestrator | registry.osism.tech/osism/cephclient reef 6cbb9cfaba46 21 hours ago 453MB
2026-04-04 01:15:44.018210 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 4 days ago 277MB
2026-04-04 01:15:44.018214 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 4 days ago 683MB
2026-04-04 01:15:44.018218 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2025.1 f3b5dcd199ab 4 days ago 319MB
2026-04-04 01:15:44.018222 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 4 days ago 317MB
2026-04-04 01:15:44.018225 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2025.1 f429f961b947 4 days ago 415MB
2026-04-04 01:15:44.018239 | orchestrator | registry.osism.tech/kolla/prometheus-server 2025.1 1ac263a9ab9a 4 days ago 860MB
2026-04-04 01:15:44.018243 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 4 days ago 368MB
2026-04-04 01:15:44.018247 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 5 days ago 590MB
2026-04-04 01:15:44.018251 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-04 01:15:44.018255 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-04 01:15:44.018258 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-04 01:15:44.018262 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-04-04 01:15:44.018266 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-04 01:15:44.018270 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-04 01:15:44.018274 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-04 01:15:44.144633 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-04 01:15:44.144974 | orchestrator | ++ semver latest 5.0.0
2026-04-04 01:15:44.191210 | orchestrator |
2026-04-04 01:15:44.191265 | orchestrator | ## Containers @ testbed-node-0
2026-04-04 01:15:44.191271 | orchestrator |
2026-04-04 01:15:44.191275 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-04 01:15:44.191279 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-04 01:15:44.191284 | orchestrator | + echo
2026-04-04 01:15:44.191288 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-04 01:15:44.191292 | orchestrator | + echo
2026-04-04 01:15:44.191298 | orchestrator | + osism container testbed-node-0 ps
2026-04-04 01:15:45.602468 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-04 01:15:45.603069 | orchestrator | 867eff61682b registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-04 01:15:45.603103 | orchestrator | 202943bf05cc registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-04 01:15:45.603112 | orchestrator | 254b1a06542d registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-04 01:15:45.603120 | orchestrator | 89bbace5bc10 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-04 01:15:45.603127 | orchestrator | d1121e15a7db registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-04 01:15:45.603134 | orchestrator | 43e44f84aa12 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy
2026-04-04 01:15:45.603149 | orchestrator | be4c202a70e3 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-04 01:15:45.603154 | orchestrator | e96907acdcf5 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-04 01:15:45.603159 | orchestrator | 0a3a12102b06 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor
2026-04-04 01:15:45.603173 | orchestrator | a7eae8642cec registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-04 01:15:45.603177 | orchestrator | 2dfc2961b35a registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-04-04 01:15:45.603182 | orchestrator | b0cd154080ee registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-04 01:15:45.603186 | orchestrator | 3e1cc11ab666 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-04 01:15:45.603191 | orchestrator | 6034a3b8b173 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-04 01:15:45.603195 | orchestrator | 18b068237ef2 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-04 01:15:45.603200 | orchestrator | a6d830290ca7 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-04 01:15:45.603204 | orchestrator | 1ea293e56068 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_metadata
2026-04-04 01:15:45.603208 | orchestrator | 39bbfab7cf73 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-04 01:15:45.603213 | orchestrator | 7c99bbcab4da registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-04 01:15:45.603218 | orchestrator | 33d716373ba8 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server
2026-04-04 01:15:45.603222 | orchestrator | 5e30a22dd32b registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-04-04 01:15:45.603238 | orchestrator | e4692049229c registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-04 01:15:45.603243 | orchestrator | c210a92147a8 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-04 01:15:45.603247 | orchestrator | 67bb6aab9ddf registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-04 01:15:45.603252 | orchestrator | 9161d04253ae registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-04 01:15:45.603258 | orchestrator | 272191916bc5 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-04-04 01:15:45.603263 | orchestrator | 98993d0cf551 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-04 01:15:45.603270 | orchestrator | 25f8d4506f89 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-04 01:15:45.603275 | orchestrator | e7f071e8e004 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-04-04 01:15:45.603283 | orchestrator | 653540d667f2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-04-04 01:15:45.603288 | orchestrator | 647c8b84f759 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-04 01:15:45.603293 | orchestrator | 44d5d0dd09a8 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2026-04-04 01:15:45.603298 | orchestrator | 8780f4c67136 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2026-04-04 01:15:45.603302 | orchestrator | 8ef441e5b0ca registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-04-04 01:15:45.603307 | orchestrator | 19ab71531c50 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2026-04-04 01:15:45.603311 | orchestrator | 1b83e50f28d2 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-04-04 01:15:45.603316 | orchestrator | 0c7b58c0700f registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2026-04-04 01:15:45.603320 | orchestrator | 1ab0ab422332 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh
2026-04-04 01:15:45.603325 | orchestrator | 826c6954a2a5 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-04-04 01:15:45.603329 | orchestrator | 425cb4705df5 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb
2026-04-04 01:15:45.603334 | orchestrator | ebcb3d7c09b3 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2026-04-04 01:15:45.603339 | orchestrator | 60048ebe6eeb registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2026-04-04 01:15:45.603343 | orchestrator | 7d618bd33157 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-04-04 01:15:45.603348 | orchestrator | ce7b18175844 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql
2026-04-04 01:15:45.603357 | orchestrator | 80cf88e2e533 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy
2026-04-04 01:15:45.603362 | orchestrator | e14f3b86a9f6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2026-04-04 01:15:45.603367 | orchestrator | bdad3f936c36 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes ovn_northd
2026-04-04 01:15:45.603371 | orchestrator | 6698ca581aad registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db_relay_1
2026-04-04 01:15:45.603382 | orchestrator | 8dddb4f53362 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db
2026-04-04 01:15:45.603387 | orchestrator | 90e2038b3f7e registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db
2026-04-04 01:15:45.603391 | orchestrator | 8be9a639167a registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2026-04-04 01:15:45.603396 | orchestrator | 52febfea2aff registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0
2026-04-04 01:15:45.603400 | orchestrator | 0c05bb9c66ef registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-04-04 01:15:45.603407 | orchestrator | 78a45b34ec80 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2026-04-04 01:15:45.603411 | orchestrator | 41580809d3b0 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_db
2026-04-04 01:15:45.603415 | orchestrator | a30004882885 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) redis_sentinel
2026-04-04 01:15:45.603419 | orchestrator | 2b9cbcaceb09 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-04-04 01:15:45.603422 | orchestrator | ecd3a8287482 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-04-04 01:15:45.603426 | orchestrator | 7f9bc3893226 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-04 01:15:45.603430 | orchestrator | fcb97d21408c registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-04 01:15:45.603434 | orchestrator | d2f892607c7a registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-04-04 01:15:45.747423 | orchestrator |
2026-04-04 01:15:45.747478 | orchestrator | ## Images @ testbed-node-0
2026-04-04 01:15:45.747484 | orchestrator |
2026-04-04 01:15:45.747489 | orchestrator | + echo
2026-04-04 01:15:45.747494 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-04 01:15:45.747499 | orchestrator | + echo
2026-04-04 01:15:45.747503 | orchestrator | + osism container testbed-node-0 images
2026-04-04 01:15:47.206916 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-04 01:15:47.206981 | orchestrator | registry.osism.tech/osism/ceph-daemon reef f46b7418fb77 21 hours ago 1.35GB
2026-04-04 01:15:47.206990 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 4 days ago 277MB
2026-04-04 01:15:47.206997 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 95a248a255b0 4 days ago 427MB
2026-04-04 01:15:47.207003 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 055a08d1d646 4 days ago 288MB
2026-04-04 01:15:47.207011 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 cac05c20af97 4 days ago 277MB
2026-04-04 01:15:47.207018 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 4 days ago 683MB
2026-04-04 01:15:47.207024 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 9c6c79e2e193 4 days ago 350MB
2026-04-04 01:15:47.207043 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 4951106b8b70 4 days ago 285MB
2026-04-04 01:15:47.207050 | orchestrator | registry.osism.tech/kolla/redis 2025.1 f2f3f0f280de 4 days ago 284MB
2026-04-04 01:15:47.207056 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 bac3fcf27cf1 4 days ago 284MB
2026-04-04 01:15:47.207062 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 829501547cd8 4 days ago 463MB
2026-04-04 01:15:47.207077 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 9f5afac77e5c 4 days ago 293MB
2026-04-04 01:15:47.207084 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 aef84137d109 4 days ago 293MB
2026-04-04 01:15:47.207090 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 8c8c7462421e 4 days ago 309MB
2026-04-04 01:15:47.207097 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 4 days ago 317MB
2026-04-04 01:15:47.207103 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 fea6a6b33ce4 4 days ago 312MB
2026-04-04 01:15:47.207110 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 4 days ago 368MB
2026-04-04 01:15:47.207116 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 5ef617a21b54 4 days ago 303MB
2026-04-04 01:15:47.207123 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 43951a9692de 4 days ago 1.2GB
2026-04-04 01:15:47.207129 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 7a60872df8bd 4 days ago 301MB
2026-04-04 01:15:47.207136 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 b22d6b5967f6 4 days ago 301MB
2026-04-04 01:15:47.207142 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b0c226bf7131 4 days ago 301MB
2026-04-04 01:15:47.207148 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 be69c3ad4ebc 4 days ago 301MB
2026-04-04 01:15:47.207155 | orchestrator | registry.osism.tech/kolla/aodh-listener 2025.1 094e864fa4b6 4 days ago 995MB
2026-04-04 01:15:47.207161 | orchestrator | registry.osism.tech/kolla/aodh-api 2025.1 d5ddbea139ad 4 days ago 994MB
2026-04-04 01:15:47.207168 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2025.1 419c6f4acdd0 4 days ago 995MB
2026-04-04 01:15:47.207175 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2025.1 b130f227014d 4 days ago 995MB
2026-04-04 01:15:47.207190 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 960aa6a4a8de 4 days ago 996MB
2026-04-04 01:15:47.207197 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 369b7ddbf017 4 days ago 1.12GB
2026-04-04 01:15:47.207204 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 d8e83229f11e 4 days ago 1.23GB
2026-04-04 01:15:47.207210 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 3fe2b0b8cfee 4 days ago 1.39GB
2026-04-04 01:15:47.207217 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 158e57839a6b 4 days ago 1.23GB
2026-04-04 01:15:47.207223 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 5e0b67322fbf 4 days ago 1.23GB
2026-04-04 01:15:47.207230 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 e83ea289f589 4 days ago 1.05GB
2026-04-04 01:15:47.207236 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 51936f6dc571 4 days ago 1.05GB
2026-04-04 01:15:47.207242 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a8a7290762c3 4 days ago 1.07GB
2026-04-04 01:15:47.207249 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 07f1afcad488 4 days ago 1.05GB
2026-04-04 01:15:47.207260 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0d3b38b2c976 4 days ago 1.07GB
2026-04-04 01:15:47.207267 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 afd6512aefbf 4 days ago 1.43GB
2026-04-04 01:15:47.207276 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 cb4b4f730395 4 days ago 1.43GB
2026-04-04 01:15:47.207283 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 db4179e1711f 4 days ago 1.79GB
2026-04-04 01:15:47.207290 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 d0319351afef 4 days ago 1.44GB
2026-04-04 01:15:47.207296 | orchestrator | registry.osism.tech/kolla/skyline-console 2025.1 778a5c2a7676 4 days ago 1.07GB
2026-04-04 01:15:47.207302 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2025.1 06549fefcbea 4 days ago 1.02GB
2026-04-04 01:15:47.207309 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2025.1 8dad295e99c2 4 days ago 997MB
2026-04-04 01:15:47.207316 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2025.1 e6e7fe48c025 4 days ago 996MB
2026-04-04 01:15:47.207322 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 1c66eb60d90d 4 days ago 1.06GB
2026-04-04 01:15:47.207328 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 f3aeb32a6011 4 days ago 1.05GB
2026-04-04 01:15:47.207335 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 d2329ba4a45d 4 days ago 1.09GB
2026-04-04 01:15:47.207341 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 d7fa3d0ffbc8 4 days ago 1.27GB
2026-04-04 01:15:47.207348 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 76d813dd9361 4 days ago 1.15GB
2026-04-04 01:15:47.207354 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 fffdc676c6f3 4 days ago 1.01GB
2026-04-04 01:15:47.207361 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 f1774da93f29 4 days ago 1GB
2026-04-04 01:15:47.207367 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 313130236671 4 days ago 1GB
2026-04-04 01:15:47.207374 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 e4df27d536ad 4 days ago 1GB
2026-04-04 01:15:47.207380 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 da5b5dd8f0f8 4 days ago 1.01GB
2026-04-04 01:15:47.207387 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 981f1c0984fd 4 days ago 1GB
2026-04-04 01:15:47.207393 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 77837382f9b4 4 days ago 1.24GB
2026-04-04 01:15:47.207400 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 2f7505ba4454 4 days ago 1GB
2026-04-04 01:15:47.207406 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 68100f4cfa52 4 days ago 1GB
2026-04-04 01:15:47.207413 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 6e6f7bfebcca 4 days ago 1GB
2026-04-04 01:15:47.207419 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 33ee60a7efe8 4 days ago 301MB
2026-04-04 01:15:47.207426 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 d1fff501712d 5 days ago 1.54GB
2026-04-04 01:15:47.207432 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 26c9d21ae7a0 5 days ago 1.57GB
2026-04-04 01:15:47.207439 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 5 days ago 590MB
2026-04-04 01:15:47.207450 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 797914887ee8 5 days ago 1.04GB
2026-04-04 01:15:47.344495 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-04 01:15:47.345062 | orchestrator | ++ semver latest 5.0.0
2026-04-04 01:15:47.388552 | orchestrator |
2026-04-04 01:15:47.388652 | orchestrator | ## Containers @ testbed-node-1
2026-04-04 01:15:47.388664 | orchestrator |
2026-04-04 01:15:47.388671 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-04 01:15:47.388676 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-04 01:15:47.388680 | orchestrator | + echo
2026-04-04 01:15:47.388684 | orchestrator | +
echo '## Containers @ testbed-node-1' 2026-04-04 01:15:47.388689 | orchestrator | + echo 2026-04-04 01:15:47.388692 | orchestrator | + osism container testbed-node-1 ps 2026-04-04 01:15:48.890154 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-04 01:15:48.890256 | orchestrator | a842809dcfda registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-04 01:15:48.890269 | orchestrator | 82bf303d2670 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-04 01:15:48.890276 | orchestrator | 6b47055d3922 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-04 01:15:48.890282 | orchestrator | 3ca3a047666d registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-04 01:15:48.890289 | orchestrator | 70e27e94f865 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-04 01:15:48.890298 | orchestrator | 4d084385dd7e registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-04-04 01:15:48.890304 | orchestrator | 78ec572f5130 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-04 01:15:48.890311 | orchestrator | 999beda16602 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-04 01:15:48.890317 | orchestrator | 1fda0565f931 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-04-04 01:15:48.890324 | orchestrator | c4bcdfbaf637 
registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-04-04 01:15:48.890331 | orchestrator | ffbb087de4ea registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-04 01:15:48.890336 | orchestrator | 71c0fe17b628 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-04 01:15:48.890342 | orchestrator | 7cf419e98678 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-04 01:15:48.890348 | orchestrator | 92dcea962b02 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-04 01:15:48.890355 | orchestrator | 6f8dc4aedfea registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-04 01:15:48.890361 | orchestrator | 9d5838d5d3e7 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-04 01:15:48.890367 | orchestrator | d13018969689 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-04 01:15:48.890394 | orchestrator | a7a6933a2178 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_metadata 2026-04-04 01:15:48.890401 | orchestrator | 745f4ee9f21e registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-04 01:15:48.890408 | orchestrator | c892a0ce28ee registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-04 01:15:48.890414 | orchestrator | f6509f8707de 
registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-04 01:15:48.890434 | orchestrator | 79639f09fe7a registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-04 01:15:48.890445 | orchestrator | 894951fad709 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-04 01:15:48.890452 | orchestrator | 21189b116b72 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-04 01:15:48.890458 | orchestrator | b1d7a1370cb1 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-04 01:15:48.890464 | orchestrator | fa1cd7726ddd registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-04-04 01:15:48.890470 | orchestrator | aa9181ebaea2 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-04 01:15:48.890476 | orchestrator | 8801588a8abb registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-04 01:15:48.890483 | orchestrator | b6a05742f99d registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-04 01:15:48.890488 | orchestrator | 4251e8efa412 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-04-04 01:15:48.890494 | orchestrator | 0195311b90b1 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-04 
01:15:48.890502 | orchestrator | f5e3ea348f22 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-04-04 01:15:48.890508 | orchestrator | 0937c73215d0 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-04-04 01:15:48.890515 | orchestrator | a23caa4e4829 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2026-04-04 01:15:48.890520 | orchestrator | 0dadd62223bd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2026-04-04 01:15:48.890526 | orchestrator | a5365cbc9fff registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-04 01:15:48.890538 | orchestrator | 790870088bae registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon 2026-04-04 01:15:48.890544 | orchestrator | c8c5d85d4652 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-04 01:15:48.890831 | orchestrator | 24695e7c4df5 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-04-04 01:15:48.890854 | orchestrator | e9f7e9510377 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-04 01:15:48.890862 | orchestrator | 0501b0b972af registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-04-04 01:15:48.890868 | orchestrator | a8c9b2eba9bf registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 
2026-04-04 01:15:48.890875 | orchestrator | 636e8ea6ddc7 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-04 01:15:48.890881 | orchestrator | 110d2c7032a6 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-04-04 01:15:48.890888 | orchestrator | 2a864ec6cfe4 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy 2026-04-04 01:15:48.890894 | orchestrator | a09fe935ad0e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2026-04-04 01:15:48.890907 | orchestrator | 0f6948537150 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes ovn_northd 2026-04-04 01:15:48.890914 | orchestrator | bafd39effa65 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db_relay_1 2026-04-04 01:15:48.890920 | orchestrator | e1c7a544719b registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 22 minutes ovn_sb_db 2026-04-04 01:15:48.890927 | orchestrator | d75ca886bc99 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 23 minutes ovn_nb_db 2026-04-04 01:15:48.890933 | orchestrator | 23a6ad1323c6 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) rabbitmq 2026-04-04 01:15:48.890939 | orchestrator | a14ab0c1f0e1 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-04 01:15:48.890945 | orchestrator | a42e5c440872 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2026-04-04 01:15:48.890952 | orchestrator | cc9943807908 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init 
--single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-04 01:15:48.890958 | orchestrator | d5d9f47eb469 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-04 01:15:48.890965 | orchestrator | c124d7a3540e registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-04 01:15:48.890980 | orchestrator | a03ac0b78127 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-04 01:15:48.890986 | orchestrator | 8f0499cf8b55 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-04 01:15:48.890992 | orchestrator | fdb99307d672 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-04 01:15:48.890998 | orchestrator | b1b290fafc57 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-04-04 01:15:48.891003 | orchestrator | 14f18cfe1409 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-04 01:15:49.036881 | orchestrator | 2026-04-04 01:15:49.036941 | orchestrator | ## Images @ testbed-node-1 2026-04-04 01:15:49.036952 | orchestrator | 2026-04-04 01:15:49.036959 | orchestrator | + echo 2026-04-04 01:15:49.036966 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-04 01:15:49.036974 | orchestrator | + echo 2026-04-04 01:15:49.036981 | orchestrator | + osism container testbed-node-1 images 2026-04-04 01:15:50.530721 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-04 01:15:50.530779 | orchestrator | registry.osism.tech/osism/ceph-daemon reef f46b7418fb77 21 hours ago 1.35GB 2026-04-04 01:15:50.530789 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 4 days ago 277MB 
2026-04-04 01:15:50.530795 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 95a248a255b0 4 days ago 427MB 2026-04-04 01:15:50.530801 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 055a08d1d646 4 days ago 288MB 2026-04-04 01:15:50.530808 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 cac05c20af97 4 days ago 277MB 2026-04-04 01:15:50.530814 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 9c6c79e2e193 4 days ago 350MB 2026-04-04 01:15:50.530820 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 4 days ago 683MB 2026-04-04 01:15:50.530826 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 4951106b8b70 4 days ago 285MB 2026-04-04 01:15:50.530832 | orchestrator | registry.osism.tech/kolla/redis 2025.1 f2f3f0f280de 4 days ago 284MB 2026-04-04 01:15:50.530837 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 bac3fcf27cf1 4 days ago 284MB 2026-04-04 01:15:50.530843 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 829501547cd8 4 days ago 463MB 2026-04-04 01:15:50.530849 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 9f5afac77e5c 4 days ago 293MB 2026-04-04 01:15:50.530855 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 aef84137d109 4 days ago 293MB 2026-04-04 01:15:50.530862 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 8c8c7462421e 4 days ago 309MB 2026-04-04 01:15:50.530869 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 4 days ago 317MB 2026-04-04 01:15:50.530874 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 fea6a6b33ce4 4 days ago 312MB 2026-04-04 01:15:50.530877 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 5ef617a21b54 4 days ago 303MB 2026-04-04 01:15:50.530881 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 4 days ago 368MB 2026-04-04 
01:15:50.530900 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 43951a9692de 4 days ago 1.2GB 2026-04-04 01:15:50.530907 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 7a60872df8bd 4 days ago 301MB 2026-04-04 01:15:50.530923 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 b22d6b5967f6 4 days ago 301MB 2026-04-04 01:15:50.530928 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b0c226bf7131 4 days ago 301MB 2026-04-04 01:15:50.530931 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 be69c3ad4ebc 4 days ago 301MB 2026-04-04 01:15:50.530935 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 960aa6a4a8de 4 days ago 996MB 2026-04-04 01:15:50.530939 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 369b7ddbf017 4 days ago 1.12GB 2026-04-04 01:15:50.530943 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 d8e83229f11e 4 days ago 1.23GB 2026-04-04 01:15:50.530947 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 3fe2b0b8cfee 4 days ago 1.39GB 2026-04-04 01:15:50.530951 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 158e57839a6b 4 days ago 1.23GB 2026-04-04 01:15:50.530954 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 5e0b67322fbf 4 days ago 1.23GB 2026-04-04 01:15:50.530958 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 e83ea289f589 4 days ago 1.05GB 2026-04-04 01:15:50.530962 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 51936f6dc571 4 days ago 1.05GB 2026-04-04 01:15:50.530966 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a8a7290762c3 4 days ago 1.07GB 2026-04-04 01:15:50.530969 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 07f1afcad488 4 days ago 1.05GB 2026-04-04 01:15:50.530973 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0d3b38b2c976 4 days ago 1.07GB 2026-04-04 01:15:50.530977 | orchestrator | 
registry.osism.tech/kolla/cinder-api 2025.1 afd6512aefbf 4 days ago 1.43GB 2026-04-04 01:15:50.530981 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 cb4b4f730395 4 days ago 1.43GB 2026-04-04 01:15:50.530995 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 db4179e1711f 4 days ago 1.79GB 2026-04-04 01:15:50.530999 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 d0319351afef 4 days ago 1.44GB 2026-04-04 01:15:50.531002 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 1c66eb60d90d 4 days ago 1.06GB 2026-04-04 01:15:50.531006 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 f3aeb32a6011 4 days ago 1.05GB 2026-04-04 01:15:50.531010 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 d2329ba4a45d 4 days ago 1.09GB 2026-04-04 01:15:50.531014 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 d7fa3d0ffbc8 4 days ago 1.27GB 2026-04-04 01:15:50.531017 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 76d813dd9361 4 days ago 1.15GB 2026-04-04 01:15:50.531021 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 fffdc676c6f3 4 days ago 1.01GB 2026-04-04 01:15:50.531025 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 f1774da93f29 4 days ago 1GB 2026-04-04 01:15:50.531028 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 313130236671 4 days ago 1GB 2026-04-04 01:15:50.531032 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 e4df27d536ad 4 days ago 1GB 2026-04-04 01:15:50.531036 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 da5b5dd8f0f8 4 days ago 1.01GB 2026-04-04 01:15:50.531044 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 981f1c0984fd 4 days ago 1GB 2026-04-04 01:15:50.531051 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 77837382f9b4 4 days ago 1.24GB 2026-04-04 01:15:50.531057 | orchestrator | 
registry.osism.tech/kolla/barbican-keystone-listener 2025.1 2f7505ba4454 4 days ago 1GB 2026-04-04 01:15:50.531063 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 68100f4cfa52 4 days ago 1GB 2026-04-04 01:15:50.531067 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 6e6f7bfebcca 4 days ago 1GB 2026-04-04 01:15:50.531070 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 33ee60a7efe8 4 days ago 301MB 2026-04-04 01:15:50.531074 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 d1fff501712d 5 days ago 1.54GB 2026-04-04 01:15:50.531078 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 26c9d21ae7a0 5 days ago 1.57GB 2026-04-04 01:15:50.531082 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 5 days ago 590MB 2026-04-04 01:15:50.531086 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 797914887ee8 5 days ago 1.04GB 2026-04-04 01:15:50.669659 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-04 01:15:50.670082 | orchestrator | ++ semver latest 5.0.0 2026-04-04 01:15:50.708514 | orchestrator | 2026-04-04 01:15:50.708562 | orchestrator | ## Containers @ testbed-node-2 2026-04-04 01:15:50.708609 | orchestrator | 2026-04-04 01:15:50.708615 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-04 01:15:50.708619 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 01:15:50.708631 | orchestrator | + echo 2026-04-04 01:15:50.708638 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-04 01:15:50.708645 | orchestrator | + echo 2026-04-04 01:15:50.708655 | orchestrator | + osism container testbed-node-2 ps 2026-04-04 01:15:52.126757 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-04 01:15:52.126818 | orchestrator | e246f05a868a registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-04 01:15:52.126829 | 
orchestrator | 6278aac3c598 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-04 01:15:52.126836 | orchestrator | df3c3701b6f5 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-04 01:15:52.126842 | orchestrator | 814975db82cd registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-04 01:15:52.126849 | orchestrator | 404347fc214a registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-04 01:15:52.126866 | orchestrator | 3615e1363c18 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-04-04 01:15:52.126880 | orchestrator | a9012a30a4d3 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-04 01:15:52.126886 | orchestrator | 4b9f38094e46 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-04 01:15:52.126893 | orchestrator | 6884f5383f56 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-04-04 01:15:52.126899 | orchestrator | b8526677bf06 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-04-04 01:15:52.126917 | orchestrator | 58ca79d2f7c6 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-04 01:15:52.126921 | orchestrator | a3c5728a19fc registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-04 01:15:52.126925 | orchestrator | 
45714a4b565e registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-04 01:15:52.126929 | orchestrator | 66e42c24688e registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-04 01:15:52.126933 | orchestrator | 0315f992d772 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-04 01:15:52.126936 | orchestrator | 3d1313a03b6c registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-04 01:15:52.126940 | orchestrator | b3654822bb60 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-04 01:15:52.126944 | orchestrator | ed1a39e5e978 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_metadata 2026-04-04 01:15:52.126948 | orchestrator | 5da062b63973 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-04 01:15:52.126951 | orchestrator | 192a8f3fdd36 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-04 01:15:52.126955 | orchestrator | fe273acfffd8 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-04 01:15:52.126968 | orchestrator | 9e56e4b93040 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-04 01:15:52.126972 | orchestrator | b27df7036018 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 
2026-04-04 01:15:52.126976 | orchestrator | d57b788befc1 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-04 01:15:52.126980 | orchestrator | 356354d79bfd registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-04 01:15:52.126984 | orchestrator | 6c06bad3ce7e registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) cinder_volume 2026-04-04 01:15:52.126990 | orchestrator | af54fe428ef2 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-04 01:15:52.126996 | orchestrator | 547301e54c6c registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-04 01:15:52.127002 | orchestrator | 0a9960d8791d registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-04 01:15:52.127014 | orchestrator | c97fc90f5702 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-04-04 01:15:52.127022 | orchestrator | 2feb0dfa77a5 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-04 01:15:52.127026 | orchestrator | 79a2d7df3c3b registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-04-04 01:15:52.127030 | orchestrator | 7942f20617b7 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-04-04 01:15:52.127037 | orchestrator | 4b4e0126597e registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 
15 minutes ago Up 15 minutes prometheus_node_exporter 2026-04-04 01:15:52.127043 | orchestrator | 16cf7aa3fd79 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2026-04-04 01:15:52.127050 | orchestrator | 91d764bfc460 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-04 01:15:52.127056 | orchestrator | 4d47f61c2e1a registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon 2026-04-04 01:15:52.127062 | orchestrator | dd4b258ae809 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-04 01:15:52.127069 | orchestrator | ac3943a4e354 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-04-04 01:15:52.127076 | orchestrator | e9b07df20398 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-04 01:15:52.127084 | orchestrator | 4b44309be2bd registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-04-04 01:15:52.127088 | orchestrator | 412e3c618c9a registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-04 01:15:52.127091 | orchestrator | 5a0330a187ac registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-04 01:15:52.127095 | orchestrator | 3a9231ca0323 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-04-04 01:15:52.127103 | orchestrator | be72472578c8 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy 2026-04-04 
01:15:52.127107 | orchestrator | a532bc6c7b3b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2026-04-04 01:15:52.127110 | orchestrator | e048f5c020eb registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-04 01:15:52.127114 | orchestrator | 4a505e99acd0 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db_relay_1 2026-04-04 01:15:52.127118 | orchestrator | 878c2e3f4b95 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) rabbitmq 2026-04-04 01:15:52.127126 | orchestrator | 79e050278a29 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 22 minutes ovn_sb_db 2026-04-04 01:15:52.127132 | orchestrator | ee1fa962f6a8 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 23 minutes ovn_nb_db 2026-04-04 01:15:52.127138 | orchestrator | 65168281f576 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-04 01:15:52.127145 | orchestrator | eaab33536c26 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2026-04-04 01:15:52.127152 | orchestrator | eea1747e3196 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-04 01:15:52.127157 | orchestrator | df083c6033e6 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-04 01:15:52.127165 | orchestrator | fa6ac48f0618 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-04 01:15:52.127179 | orchestrator | 0d4b3c6d103b 
registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-04 01:15:52.127185 | orchestrator | e3947660876c registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-04 01:15:52.127191 | orchestrator | e13921a12ecf registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-04 01:15:52.127197 | orchestrator | 4942c2d1e1b3 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-04-04 01:15:52.127206 | orchestrator | 9e95730500d7 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-04 01:15:52.284738 | orchestrator | 2026-04-04 01:15:52.284813 | orchestrator | ## Images @ testbed-node-2 2026-04-04 01:15:52.284824 | orchestrator | 2026-04-04 01:15:52.284831 | orchestrator | + echo 2026-04-04 01:15:52.284839 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-04 01:15:52.284846 | orchestrator | + echo 2026-04-04 01:15:52.284853 | orchestrator | + osism container testbed-node-2 images 2026-04-04 01:15:53.702186 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-04 01:15:53.702246 | orchestrator | registry.osism.tech/osism/ceph-daemon reef f46b7418fb77 21 hours ago 1.35GB 2026-04-04 01:15:53.702255 | orchestrator | registry.osism.tech/kolla/cron 2025.1 53831d2a110c 4 days ago 277MB 2026-04-04 01:15:53.702262 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 95a248a255b0 4 days ago 427MB 2026-04-04 01:15:53.702268 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 055a08d1d646 4 days ago 288MB 2026-04-04 01:15:53.702275 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 cac05c20af97 4 days ago 277MB 2026-04-04 01:15:53.702281 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 c170972da654 4 days ago 683MB 2026-04-04 01:15:53.702288 | 
orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 9c6c79e2e193 4 days ago 350MB 2026-04-04 01:15:53.702294 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 4951106b8b70 4 days ago 285MB 2026-04-04 01:15:53.702314 | orchestrator | registry.osism.tech/kolla/redis 2025.1 f2f3f0f280de 4 days ago 284MB 2026-04-04 01:15:53.702321 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 bac3fcf27cf1 4 days ago 284MB 2026-04-04 01:15:53.702327 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 829501547cd8 4 days ago 463MB 2026-04-04 01:15:53.702334 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 9f5afac77e5c 4 days ago 293MB 2026-04-04 01:15:53.702340 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 aef84137d109 4 days ago 293MB 2026-04-04 01:15:53.702346 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 8c8c7462421e 4 days ago 309MB 2026-04-04 01:15:53.702353 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 98c589004138 4 days ago 317MB 2026-04-04 01:15:53.702359 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 fea6a6b33ce4 4 days ago 312MB 2026-04-04 01:15:53.702365 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 77360379dc5a 4 days ago 368MB 2026-04-04 01:15:53.702371 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 5ef617a21b54 4 days ago 303MB 2026-04-04 01:15:53.702378 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 43951a9692de 4 days ago 1.2GB 2026-04-04 01:15:53.702384 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 7a60872df8bd 4 days ago 301MB 2026-04-04 01:15:53.702391 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 b22d6b5967f6 4 days ago 301MB 2026-04-04 01:15:53.702397 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b0c226bf7131 4 days ago 301MB 2026-04-04 01:15:53.702404 | 
orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 be69c3ad4ebc 4 days ago 301MB 2026-04-04 01:15:53.702410 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 960aa6a4a8de 4 days ago 996MB 2026-04-04 01:15:53.702417 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 369b7ddbf017 4 days ago 1.12GB 2026-04-04 01:15:53.702423 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 d8e83229f11e 4 days ago 1.23GB 2026-04-04 01:15:53.702429 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 3fe2b0b8cfee 4 days ago 1.39GB 2026-04-04 01:15:53.702435 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 158e57839a6b 4 days ago 1.23GB 2026-04-04 01:15:53.702441 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 5e0b67322fbf 4 days ago 1.23GB 2026-04-04 01:15:53.702458 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 e83ea289f589 4 days ago 1.05GB 2026-04-04 01:15:53.702465 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 51936f6dc571 4 days ago 1.05GB 2026-04-04 01:15:53.702472 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a8a7290762c3 4 days ago 1.07GB 2026-04-04 01:15:53.702478 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 07f1afcad488 4 days ago 1.05GB 2026-04-04 01:15:53.702484 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0d3b38b2c976 4 days ago 1.07GB 2026-04-04 01:15:53.702490 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 afd6512aefbf 4 days ago 1.43GB 2026-04-04 01:15:53.702496 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 cb4b4f730395 4 days ago 1.43GB 2026-04-04 01:15:53.702514 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 db4179e1711f 4 days ago 1.79GB 2026-04-04 01:15:53.702521 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 d0319351afef 4 days ago 1.44GB 2026-04-04 01:15:53.702537 | orchestrator | 
registry.osism.tech/kolla/keystone-ssh 2025.1 1c66eb60d90d 4 days ago 1.06GB 2026-04-04 01:15:53.702543 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 f3aeb32a6011 4 days ago 1.05GB 2026-04-04 01:15:53.702550 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 d2329ba4a45d 4 days ago 1.09GB 2026-04-04 01:15:53.702556 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 d7fa3d0ffbc8 4 days ago 1.27GB 2026-04-04 01:15:53.702563 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 76d813dd9361 4 days ago 1.15GB 2026-04-04 01:15:53.702616 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 fffdc676c6f3 4 days ago 1.01GB 2026-04-04 01:15:53.702623 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 f1774da93f29 4 days ago 1GB 2026-04-04 01:15:53.702630 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 313130236671 4 days ago 1GB 2026-04-04 01:15:53.702636 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 e4df27d536ad 4 days ago 1GB 2026-04-04 01:15:53.702643 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 da5b5dd8f0f8 4 days ago 1.01GB 2026-04-04 01:15:53.702649 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 981f1c0984fd 4 days ago 1GB 2026-04-04 01:15:53.702656 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 77837382f9b4 4 days ago 1.24GB 2026-04-04 01:15:53.702662 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 2f7505ba4454 4 days ago 1GB 2026-04-04 01:15:53.702668 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 68100f4cfa52 4 days ago 1GB 2026-04-04 01:15:53.702678 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 6e6f7bfebcca 4 days ago 1GB 2026-04-04 01:15:53.702684 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 33ee60a7efe8 4 days ago 301MB 2026-04-04 01:15:53.702691 | orchestrator | 
registry.osism.tech/kolla/opensearch-dashboards 2025.1 d1fff501712d 5 days ago 1.54GB 2026-04-04 01:15:53.702697 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 26c9d21ae7a0 5 days ago 1.57GB 2026-04-04 01:15:53.702704 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 16094ab8b9a7 5 days ago 590MB 2026-04-04 01:15:53.702710 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 797914887ee8 5 days ago 1.04GB 2026-04-04 01:15:53.843845 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-04 01:15:53.853522 | orchestrator | + set -e 2026-04-04 01:15:53.853606 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 01:15:53.854885 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 01:15:53.854930 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 01:15:53.854938 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 01:15:53.854944 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 01:15:53.854950 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 01:15:53.854958 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 01:15:53.854964 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:15:53.854970 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:15:53.854977 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-04 01:15:53.854984 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-04 01:15:53.854991 | orchestrator | ++ export ARA=false 2026-04-04 01:15:53.854997 | orchestrator | ++ ARA=false 2026-04-04 01:15:53.855004 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 01:15:53.855010 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 01:15:53.855015 | orchestrator | ++ export TEMPEST=true 2026-04-04 01:15:53.855022 | orchestrator | ++ TEMPEST=true 2026-04-04 01:15:53.855373 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 01:15:53.855458 | orchestrator | ++ IS_ZUUL=true 2026-04-04 01:15:53.855476 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 
2026-04-04 01:15:53.855488 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-04 01:15:53.855525 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 01:15:53.855537 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 01:15:53.855548 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 01:15:53.855559 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 01:15:53.855597 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 01:15:53.855610 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 01:15:53.855622 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 01:15:53.855633 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 01:15:53.855645 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-04 01:15:53.855656 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-04 01:15:53.866785 | orchestrator | + set -e 2026-04-04 01:15:53.866864 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 01:15:53.866876 | orchestrator | ++ export INTERACTIVE=false 2026-04-04 01:15:53.866887 | orchestrator | ++ INTERACTIVE=false 2026-04-04 01:15:53.866897 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 01:15:53.866907 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 01:15:53.866917 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-04 01:15:53.867964 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-04 01:15:53.871822 | orchestrator | 2026-04-04 01:15:53.871877 | orchestrator | # Ceph status 2026-04-04 01:15:53.871885 | orchestrator | 2026-04-04 01:15:53.871892 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:15:53.871899 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:15:53.871906 | orchestrator | + echo 2026-04-04 01:15:53.871913 | orchestrator | + echo '# Ceph status' 2026-04-04 01:15:53.871919 | orchestrator | + echo 
2026-04-04 01:15:53.871925 | orchestrator | + ceph -s 2026-04-04 01:15:54.423932 | orchestrator | cluster: 2026-04-04 01:15:54.424004 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-04 01:15:54.424011 | orchestrator | health: HEALTH_OK 2026-04-04 01:15:54.424016 | orchestrator | 2026-04-04 01:15:54.424020 | orchestrator | services: 2026-04-04 01:15:54.424024 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m) 2026-04-04 01:15:54.424032 | orchestrator | mgr: testbed-node-0(active, since 16m), standbys: testbed-node-2, testbed-node-1 2026-04-04 01:15:54.424040 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-04 01:15:54.424047 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2026-04-04 01:15:54.424053 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-04 01:15:54.424060 | orchestrator | 2026-04-04 01:15:54.424066 | orchestrator | data: 2026-04-04 01:15:54.424075 | orchestrator | volumes: 1/1 healthy 2026-04-04 01:15:54.424083 | orchestrator | pools: 14 pools, 401 pgs 2026-04-04 01:15:54.424090 | orchestrator | objects: 555 objects, 2.2 GiB 2026-04-04 01:15:54.424097 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-04 01:15:54.424103 | orchestrator | pgs: 401 active+clean 2026-04-04 01:15:54.424110 | orchestrator | 2026-04-04 01:15:54.466745 | orchestrator | 2026-04-04 01:15:54.466811 | orchestrator | # Ceph versions 2026-04-04 01:15:54.466818 | orchestrator | 2026-04-04 01:15:54.466823 | orchestrator | + echo 2026-04-04 01:15:54.466828 | orchestrator | + echo '# Ceph versions' 2026-04-04 01:15:54.466834 | orchestrator | + echo 2026-04-04 01:15:54.466838 | orchestrator | + ceph versions 2026-04-04 01:15:55.029671 | orchestrator | { 2026-04-04 01:15:55.029759 | orchestrator | "mon": { 2026-04-04 01:15:55.029769 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:15:55.029777 | orchestrator | 
}, 2026-04-04 01:15:55.029784 | orchestrator | "mgr": { 2026-04-04 01:15:55.029792 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:15:55.029799 | orchestrator | }, 2026-04-04 01:15:55.029805 | orchestrator | "osd": { 2026-04-04 01:15:55.029812 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-04 01:15:55.029819 | orchestrator | }, 2026-04-04 01:15:55.029825 | orchestrator | "mds": { 2026-04-04 01:15:55.029832 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:15:55.029838 | orchestrator | }, 2026-04-04 01:15:55.029845 | orchestrator | "rgw": { 2026-04-04 01:15:55.029852 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:15:55.029858 | orchestrator | }, 2026-04-04 01:15:55.029865 | orchestrator | "overall": { 2026-04-04 01:15:55.029872 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-04 01:15:55.029905 | orchestrator | } 2026-04-04 01:15:55.029912 | orchestrator | } 2026-04-04 01:15:55.075043 | orchestrator | 2026-04-04 01:15:55.075112 | orchestrator | # Ceph OSD tree 2026-04-04 01:15:55.075118 | orchestrator | 2026-04-04 01:15:55.075122 | orchestrator | + echo 2026-04-04 01:15:55.075127 | orchestrator | + echo '# Ceph OSD tree' 2026-04-04 01:15:55.075132 | orchestrator | + echo 2026-04-04 01:15:55.075136 | orchestrator | + ceph osd df tree 2026-04-04 01:15:55.576489 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-04 01:15:55.576630 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-04-04 01:15:55.576641 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-04-04 01:15:55.576645 | 
orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.41 1.08 199 up osd.0 2026-04-04 01:15:55.576649 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.42 0.92 193 up osd.5 2026-04-04 01:15:55.576654 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-04-04 01:15:55.576658 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 18 GiB 7.41 1.25 184 up osd.1 2026-04-04 01:15:55.576661 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 905 MiB 835 MiB 1 KiB 70 MiB 19 GiB 4.42 0.75 204 up osd.3 2026-04-04 01:15:55.576665 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-04-04 01:15:55.576669 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 8.02 1.36 195 up osd.2 2026-04-04 01:15:55.576673 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 780 MiB 707 MiB 1 KiB 74 MiB 19 GiB 3.81 0.64 195 up osd.4 2026-04-04 01:15:55.576676 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-04-04 01:15:55.576681 | orchestrator | MIN/MAX VAR: 0.64/1.36 STDDEV: 1.52 2026-04-04 01:15:55.623115 | orchestrator | 2026-04-04 01:15:55.623182 | orchestrator | # Ceph monitor status 2026-04-04 01:15:55.623190 | orchestrator | 2026-04-04 01:15:55.623195 | orchestrator | + echo 2026-04-04 01:15:55.623200 | orchestrator | + echo '# Ceph monitor status' 2026-04-04 01:15:55.623205 | orchestrator | + echo 2026-04-04 01:15:55.623210 | orchestrator | + ceph mon stat 2026-04-04 01:15:56.223401 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 
testbed-node-0,testbed-node-1,testbed-node-2 2026-04-04 01:15:56.271269 | orchestrator | 2026-04-04 01:15:56.271368 | orchestrator | # Ceph quorum status 2026-04-04 01:15:56.271378 | orchestrator | 2026-04-04 01:15:56.271383 | orchestrator | + echo 2026-04-04 01:15:56.271389 | orchestrator | + echo '# Ceph quorum status' 2026-04-04 01:15:56.271394 | orchestrator | + echo 2026-04-04 01:15:56.271441 | orchestrator | + ceph quorum_status 2026-04-04 01:15:56.271775 | orchestrator | + jq 2026-04-04 01:15:56.882233 | orchestrator | { 2026-04-04 01:15:56.882328 | orchestrator | "election_epoch": 8, 2026-04-04 01:15:56.882337 | orchestrator | "quorum": [ 2026-04-04 01:15:56.882345 | orchestrator | 0, 2026-04-04 01:15:56.882351 | orchestrator | 1, 2026-04-04 01:15:56.882357 | orchestrator | 2 2026-04-04 01:15:56.882364 | orchestrator | ], 2026-04-04 01:15:56.882371 | orchestrator | "quorum_names": [ 2026-04-04 01:15:56.882377 | orchestrator | "testbed-node-0", 2026-04-04 01:15:56.882384 | orchestrator | "testbed-node-1", 2026-04-04 01:15:56.882391 | orchestrator | "testbed-node-2" 2026-04-04 01:15:56.882396 | orchestrator | ], 2026-04-04 01:15:56.882403 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-04 01:15:56.882410 | orchestrator | "quorum_age": 1559, 2026-04-04 01:15:56.882416 | orchestrator | "features": { 2026-04-04 01:15:56.882422 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-04 01:15:56.882450 | orchestrator | "quorum_mon": [ 2026-04-04 01:15:56.882456 | orchestrator | "kraken", 2026-04-04 01:15:56.882463 | orchestrator | "luminous", 2026-04-04 01:15:56.882470 | orchestrator | "mimic", 2026-04-04 01:15:56.882476 | orchestrator | "osdmap-prune", 2026-04-04 01:15:56.882482 | orchestrator | "nautilus", 2026-04-04 01:15:56.882487 | orchestrator | "octopus", 2026-04-04 01:15:56.882493 | orchestrator | "pacific", 2026-04-04 01:15:56.882499 | orchestrator | "elector-pinging", 2026-04-04 01:15:56.882505 | orchestrator | "quincy", 2026-04-04 
01:15:56.882511 | orchestrator | "reef" 2026-04-04 01:15:56.882518 | orchestrator | ] 2026-04-04 01:15:56.882523 | orchestrator | }, 2026-04-04 01:15:56.882530 | orchestrator | "monmap": { 2026-04-04 01:15:56.882536 | orchestrator | "epoch": 1, 2026-04-04 01:15:56.882542 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-04 01:15:56.882550 | orchestrator | "modified": "2026-04-04T00:49:39.496219Z", 2026-04-04 01:15:56.882556 | orchestrator | "created": "2026-04-04T00:49:39.496219Z", 2026-04-04 01:15:56.882563 | orchestrator | "min_mon_release": 18, 2026-04-04 01:15:56.882624 | orchestrator | "min_mon_release_name": "reef", 2026-04-04 01:15:56.882631 | orchestrator | "election_strategy": 1, 2026-04-04 01:15:56.882637 | orchestrator | "disallowed_leaders": "", 2026-04-04 01:15:56.882643 | orchestrator | "stretch_mode": false, 2026-04-04 01:15:56.882649 | orchestrator | "tiebreaker_mon": "", 2026-04-04 01:15:56.882656 | orchestrator | "removed_ranks": "", 2026-04-04 01:15:56.882662 | orchestrator | "features": { 2026-04-04 01:15:56.882668 | orchestrator | "persistent": [ 2026-04-04 01:15:56.882675 | orchestrator | "kraken", 2026-04-04 01:15:56.882681 | orchestrator | "luminous", 2026-04-04 01:15:56.882687 | orchestrator | "mimic", 2026-04-04 01:15:56.882693 | orchestrator | "osdmap-prune", 2026-04-04 01:15:56.882700 | orchestrator | "nautilus", 2026-04-04 01:15:56.882706 | orchestrator | "octopus", 2026-04-04 01:15:56.882712 | orchestrator | "pacific", 2026-04-04 01:15:56.882718 | orchestrator | "elector-pinging", 2026-04-04 01:15:56.882724 | orchestrator | "quincy", 2026-04-04 01:15:56.882731 | orchestrator | "reef" 2026-04-04 01:15:56.882737 | orchestrator | ], 2026-04-04 01:15:56.882743 | orchestrator | "optional": [] 2026-04-04 01:15:56.882749 | orchestrator | }, 2026-04-04 01:15:56.882755 | orchestrator | "mons": [ 2026-04-04 01:15:56.882762 | orchestrator | { 2026-04-04 01:15:56.882768 | orchestrator | "rank": 0, 2026-04-04 01:15:56.882775 
| orchestrator | "name": "testbed-node-0", 2026-04-04 01:15:56.882782 | orchestrator | "public_addrs": { 2026-04-04 01:15:56.882788 | orchestrator | "addrvec": [ 2026-04-04 01:15:56.882794 | orchestrator | { 2026-04-04 01:15:56.882799 | orchestrator | "type": "v2", 2026-04-04 01:15:56.882806 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-04 01:15:56.882812 | orchestrator | "nonce": 0 2026-04-04 01:15:56.882818 | orchestrator | }, 2026-04-04 01:15:56.882825 | orchestrator | { 2026-04-04 01:15:56.882832 | orchestrator | "type": "v1", 2026-04-04 01:15:56.882838 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-04 01:15:56.882845 | orchestrator | "nonce": 0 2026-04-04 01:15:56.882851 | orchestrator | } 2026-04-04 01:15:56.882858 | orchestrator | ] 2026-04-04 01:15:56.882864 | orchestrator | }, 2026-04-04 01:15:56.882871 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-04 01:15:56.882878 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-04 01:15:56.882884 | orchestrator | "priority": 0, 2026-04-04 01:15:56.882890 | orchestrator | "weight": 0, 2026-04-04 01:15:56.882897 | orchestrator | "crush_location": "{}" 2026-04-04 01:15:56.882903 | orchestrator | }, 2026-04-04 01:15:56.882910 | orchestrator | { 2026-04-04 01:15:56.882917 | orchestrator | "rank": 1, 2026-04-04 01:15:56.882923 | orchestrator | "name": "testbed-node-1", 2026-04-04 01:15:56.882930 | orchestrator | "public_addrs": { 2026-04-04 01:15:56.882936 | orchestrator | "addrvec": [ 2026-04-04 01:15:56.882943 | orchestrator | { 2026-04-04 01:15:56.882949 | orchestrator | "type": "v2", 2026-04-04 01:15:56.882956 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-04 01:15:56.882963 | orchestrator | "nonce": 0 2026-04-04 01:15:56.882969 | orchestrator | }, 2026-04-04 01:15:56.882976 | orchestrator | { 2026-04-04 01:15:56.882983 | orchestrator | "type": "v1", 2026-04-04 01:15:56.882990 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-04 01:15:56.882996 | orchestrator | 
"nonce": 0 2026-04-04 01:15:56.883010 | orchestrator | } 2026-04-04 01:15:56.883017 | orchestrator | ] 2026-04-04 01:15:56.883024 | orchestrator | }, 2026-04-04 01:15:56.883030 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-04 01:15:56.883037 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-04 01:15:56.883044 | orchestrator | "priority": 0, 2026-04-04 01:15:56.883050 | orchestrator | "weight": 0, 2026-04-04 01:15:56.883057 | orchestrator | "crush_location": "{}" 2026-04-04 01:15:56.883064 | orchestrator | }, 2026-04-04 01:15:56.883070 | orchestrator | { 2026-04-04 01:15:56.883076 | orchestrator | "rank": 2, 2026-04-04 01:15:56.883083 | orchestrator | "name": "testbed-node-2", 2026-04-04 01:15:56.883089 | orchestrator | "public_addrs": { 2026-04-04 01:15:56.883102 | orchestrator | "addrvec": [ 2026-04-04 01:15:56.883111 | orchestrator | { 2026-04-04 01:15:56.883117 | orchestrator | "type": "v2", 2026-04-04 01:15:56.883123 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-04 01:15:56.883129 | orchestrator | "nonce": 0 2026-04-04 01:15:56.883134 | orchestrator | }, 2026-04-04 01:15:56.883141 | orchestrator | { 2026-04-04 01:15:56.883147 | orchestrator | "type": "v1", 2026-04-04 01:15:56.883153 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-04 01:15:56.883159 | orchestrator | "nonce": 0 2026-04-04 01:15:56.883165 | orchestrator | } 2026-04-04 01:15:56.883172 | orchestrator | ] 2026-04-04 01:15:56.883178 | orchestrator | }, 2026-04-04 01:15:56.883185 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-04 01:15:56.883191 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-04 01:15:56.883198 | orchestrator | "priority": 0, 2026-04-04 01:15:56.883204 | orchestrator | "weight": 0, 2026-04-04 01:15:56.883210 | orchestrator | "crush_location": "{}" 2026-04-04 01:15:56.883216 | orchestrator | } 2026-04-04 01:15:56.883222 | orchestrator | ] 2026-04-04 01:15:56.883228 | orchestrator | } 2026-04-04 01:15:56.883234 | 
orchestrator | } 2026-04-04 01:15:56.883240 | orchestrator | 2026-04-04 01:15:56.883246 | orchestrator | # Ceph free space status 2026-04-04 01:15:56.883251 | orchestrator | 2026-04-04 01:15:56.883257 | orchestrator | + echo 2026-04-04 01:15:56.883263 | orchestrator | + echo '# Ceph free space status' 2026-04-04 01:15:56.883269 | orchestrator | + echo 2026-04-04 01:15:56.883276 | orchestrator | + ceph df 2026-04-04 01:15:57.456949 | orchestrator | --- RAW STORAGE --- 2026-04-04 01:15:57.457050 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-04 01:15:57.457076 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-04 01:15:57.457083 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-04 01:15:57.457089 | orchestrator | 2026-04-04 01:15:57.457095 | orchestrator | --- POOLS --- 2026-04-04 01:15:57.457101 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-04 01:15:57.457108 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-04-04 01:15:57.457114 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-04 01:15:57.457119 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-04 01:15:57.457125 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-04 01:15:57.457130 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-04 01:15:57.457136 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-04 01:15:57.457142 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-04 01:15:57.457147 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-04 01:15:57.457153 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 52 GiB 2026-04-04 01:15:57.457158 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:15:57.457163 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:15:57.457184 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 6.00 35 GiB 2026-04-04 01:15:57.457193 | 
orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:15:57.457201 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:15:57.501698 | orchestrator | ++ semver latest 5.0.0 2026-04-04 01:15:57.546417 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-04 01:15:57.546487 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 01:15:57.546494 | orchestrator | + osism apply facts 2026-04-04 01:16:08.971115 | orchestrator | 2026-04-04 01:16:08 | INFO  | Prepare task for execution of facts. 2026-04-04 01:16:09.046241 | orchestrator | 2026-04-04 01:16:09 | INFO  | Task 3101deec-451d-44d8-a4eb-b2a14a26cb2a (facts) was prepared for execution. 2026-04-04 01:16:09.046309 | orchestrator | 2026-04-04 01:16:09 | INFO  | It takes a moment until task 3101deec-451d-44d8-a4eb-b2a14a26cb2a (facts) has been started and output is visible here. 2026-04-04 01:16:20.868711 | orchestrator | 2026-04-04 01:16:20.868780 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-04 01:16:20.868791 | orchestrator | 2026-04-04 01:16:20.868798 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-04 01:16:20.868805 | orchestrator | Saturday 04 April 2026 01:16:12 +0000 (0:00:00.348) 0:00:00.348 ******** 2026-04-04 01:16:20.868812 | orchestrator | ok: [testbed-manager] 2026-04-04 01:16:20.868820 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:16:20.868827 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:16:20.868834 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:16:20.868840 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:16:20.868846 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:16:20.868853 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:16:20.868860 | orchestrator | 2026-04-04 01:16:20.868866 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-04 01:16:20.868873 | orchestrator | Saturday 04 April 2026 
01:16:13 +0000 (0:00:01.490) 0:00:01.839 ******** 2026-04-04 01:16:20.868880 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:16:20.868887 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:16:20.868894 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:16:20.868900 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:16:20.868907 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:16:20.868913 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:16:20.868920 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:16:20.868926 | orchestrator | 2026-04-04 01:16:20.868933 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-04 01:16:20.868940 | orchestrator | 2026-04-04 01:16:20.868946 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-04 01:16:20.868953 | orchestrator | Saturday 04 April 2026 01:16:15 +0000 (0:00:01.249) 0:00:03.088 ******** 2026-04-04 01:16:20.868960 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:16:20.868966 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:16:20.868973 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:16:20.868980 | orchestrator | ok: [testbed-manager] 2026-04-04 01:16:20.868986 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:16:20.868993 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:16:20.868999 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:16:20.869006 | orchestrator | 2026-04-04 01:16:20.869013 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-04 01:16:20.869019 | orchestrator | 2026-04-04 01:16:20.869026 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-04 01:16:20.869033 | orchestrator | Saturday 04 April 2026 01:16:19 +0000 (0:00:04.674) 0:00:07.763 ******** 2026-04-04 01:16:20.869040 | orchestrator | skipping: [testbed-manager] 2026-04-04 
01:16:20.869046 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:20.869053 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:16:20.869060 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:16:20.869066 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:16:20.869073 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:16:20.869079 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:16:20.869086 | orchestrator |
2026-04-04 01:16:20.869092 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:16:20.869099 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:20.869125 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:20.869132 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:20.869140 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:20.869152 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:20.869163 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:20.869174 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:20.869184 | orchestrator |
2026-04-04 01:16:20.869194 | orchestrator |
2026-04-04 01:16:20.869205 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:16:20.869216 | orchestrator | Saturday 04 April 2026 01:16:20 +0000 (0:00:00.760) 0:00:08.524 ********
2026-04-04 01:16:20.869227 | orchestrator | ===============================================================================
2026-04-04 01:16:20.869238 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.67s
2026-04-04 01:16:20.869250 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.49s
2026-04-04 01:16:20.869262 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s
2026-04-04 01:16:20.869274 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s
2026-04-04 01:16:21.041034 | orchestrator | + osism validate ceph-mons
2026-04-04 01:16:52.026879 | orchestrator |
2026-04-04 01:16:52.026973 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-04 01:16:52.026986 | orchestrator |
2026-04-04 01:16:52.026994 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-04 01:16:52.027001 | orchestrator | Saturday 04 April 2026 01:16:35 +0000 (0:00:00.512) 0:00:00.512 ********
2026-04-04 01:16:52.027009 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:16:52.027016 | orchestrator |
2026-04-04 01:16:52.027038 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-04 01:16:52.027046 | orchestrator | Saturday 04 April 2026 01:16:36 +0000 (0:00:00.979) 0:00:01.491 ********
2026-04-04 01:16:52.027053 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:16:52.027060 | orchestrator |
2026-04-04 01:16:52.027068 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-04 01:16:52.027075 | orchestrator | Saturday 04 April 2026 01:16:37 +0000 (0:00:00.650) 0:00:02.142 ********
2026-04-04 01:16:52.027083 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027090 | orchestrator |
2026-04-04 01:16:52.027097 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-04 01:16:52.027103 | orchestrator | Saturday 04 April 2026 01:16:37 +0000 (0:00:00.140) 0:00:02.282 ********
2026-04-04 01:16:52.027109 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027117 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:16:52.027123 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:16:52.027129 | orchestrator |
2026-04-04 01:16:52.027135 | orchestrator | TASK [Get container info] ******************************************************
2026-04-04 01:16:52.027142 | orchestrator | Saturday 04 April 2026 01:16:38 +0000 (0:00:00.280) 0:00:02.563 ********
2026-04-04 01:16:52.027148 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:16:52.027155 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:16:52.027161 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027187 | orchestrator |
2026-04-04 01:16:52.027194 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-04 01:16:52.027200 | orchestrator | Saturday 04 April 2026 01:16:39 +0000 (0:00:01.584) 0:00:04.147 ********
2026-04-04 01:16:52.027206 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027212 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:16:52.027218 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:16:52.027225 | orchestrator |
2026-04-04 01:16:52.027231 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-04 01:16:52.027237 | orchestrator | Saturday 04 April 2026 01:16:39 +0000 (0:00:00.291) 0:00:04.439 ********
2026-04-04 01:16:52.027243 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027250 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:16:52.027256 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:16:52.027262 | orchestrator |
2026-04-04 01:16:52.027268 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:16:52.027275 | orchestrator | Saturday 04 April 2026 01:16:40 +0000 (0:00:00.290) 0:00:04.729 ********
2026-04-04 01:16:52.027281 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027287 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:16:52.027294 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:16:52.027300 | orchestrator |
2026-04-04 01:16:52.027307 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-04 01:16:52.027315 | orchestrator | Saturday 04 April 2026 01:16:40 +0000 (0:00:00.307) 0:00:05.037 ********
2026-04-04 01:16:52.027322 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027330 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:16:52.027337 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:16:52.027342 | orchestrator |
2026-04-04 01:16:52.027345 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-04 01:16:52.027349 | orchestrator | Saturday 04 April 2026 01:16:40 +0000 (0:00:00.453) 0:00:05.490 ********
2026-04-04 01:16:52.027353 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027357 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:16:52.027361 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:16:52.027365 | orchestrator |
2026-04-04 01:16:52.027369 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:16:52.027373 | orchestrator | Saturday 04 April 2026 01:16:41 +0000 (0:00:00.307) 0:00:05.798 ********
2026-04-04 01:16:52.027377 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027381 | orchestrator |
2026-04-04 01:16:52.027385 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:16:52.027389 | orchestrator | Saturday 04 April 2026 01:16:41 +0000 (0:00:00.232) 0:00:06.030 ********
2026-04-04 01:16:52.027392 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027396 | orchestrator |
2026-04-04 01:16:52.027400 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:16:52.027404 | orchestrator | Saturday 04 April 2026 01:16:41 +0000 (0:00:00.246) 0:00:06.277 ********
2026-04-04 01:16:52.027407 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027411 | orchestrator |
2026-04-04 01:16:52.027416 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:16:52.027420 | orchestrator | Saturday 04 April 2026 01:16:41 +0000 (0:00:00.255) 0:00:06.532 ********
2026-04-04 01:16:52.027425 | orchestrator |
2026-04-04 01:16:52.027430 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:16:52.027434 | orchestrator | Saturday 04 April 2026 01:16:42 +0000 (0:00:00.069) 0:00:06.602 ********
2026-04-04 01:16:52.027439 | orchestrator |
2026-04-04 01:16:52.027445 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:16:52.027451 | orchestrator | Saturday 04 April 2026 01:16:42 +0000 (0:00:00.067) 0:00:06.670 ********
2026-04-04 01:16:52.027457 | orchestrator |
2026-04-04 01:16:52.027463 | orchestrator | TASK [Print report file information] *******************************************
2026-04-04 01:16:52.027473 | orchestrator | Saturday 04 April 2026 01:16:42 +0000 (0:00:00.223) 0:00:06.893 ********
2026-04-04 01:16:52.027488 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027494 | orchestrator |
2026-04-04 01:16:52.027506 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-04 01:16:52.027513 | orchestrator | Saturday 04 April 2026 01:16:42 +0000 (0:00:00.250) 0:00:07.144 ********
2026-04-04 01:16:52.027518 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027524 | orchestrator |
2026-04-04 01:16:52.027547 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-04 01:16:52.027614 | orchestrator | Saturday 04 April 2026 01:16:42 +0000 (0:00:00.254) 0:00:07.399 ********
2026-04-04 01:16:52.027623 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027629 | orchestrator |
2026-04-04 01:16:52.027636 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-04 01:16:52.027642 | orchestrator | Saturday 04 April 2026 01:16:42 +0000 (0:00:00.128) 0:00:07.527 ********
2026-04-04 01:16:52.027648 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:16:52.027654 | orchestrator |
2026-04-04 01:16:52.027661 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-04 01:16:52.027666 | orchestrator | Saturday 04 April 2026 01:16:44 +0000 (0:00:01.774) 0:00:09.302 ********
2026-04-04 01:16:52.027672 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027678 | orchestrator |
2026-04-04 01:16:52.027684 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-04 01:16:52.027689 | orchestrator | Saturday 04 April 2026 01:16:45 +0000 (0:00:00.298) 0:00:09.600 ********
2026-04-04 01:16:52.027696 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027701 | orchestrator |
2026-04-04 01:16:52.027707 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-04 01:16:52.027713 | orchestrator | Saturday 04 April 2026 01:16:45 +0000 (0:00:00.121) 0:00:09.722 ********
2026-04-04 01:16:52.027719 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027725 | orchestrator |
2026-04-04 01:16:52.027730 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-04 01:16:52.027736 | orchestrator | Saturday 04 April 2026 01:16:45 +0000 (0:00:00.314) 0:00:10.037 ********
2026-04-04 01:16:52.027742 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027748 | orchestrator |
2026-04-04 01:16:52.027754 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-04 01:16:52.027760 | orchestrator | Saturday 04 April 2026 01:16:45 +0000 (0:00:00.290) 0:00:10.328 ********
2026-04-04 01:16:52.027766 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027771 | orchestrator |
2026-04-04 01:16:52.027777 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-04 01:16:52.027783 | orchestrator | Saturday 04 April 2026 01:16:45 +0000 (0:00:00.109) 0:00:10.437 ********
2026-04-04 01:16:52.027789 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027795 | orchestrator |
2026-04-04 01:16:52.027801 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-04 01:16:52.027807 | orchestrator | Saturday 04 April 2026 01:16:46 +0000 (0:00:00.128) 0:00:10.566 ********
2026-04-04 01:16:52.027813 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027818 | orchestrator |
2026-04-04 01:16:52.027825 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-04 01:16:52.027830 | orchestrator | Saturday 04 April 2026 01:16:46 +0000 (0:00:00.294) 0:00:10.860 ********
2026-04-04 01:16:52.027836 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:16:52.027842 | orchestrator |
2026-04-04 01:16:52.027848 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-04 01:16:52.027854 | orchestrator | Saturday 04 April 2026 01:16:47 +0000 (0:00:01.521) 0:00:12.382 ********
2026-04-04 01:16:52.027859 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027865 | orchestrator |
2026-04-04 01:16:52.027871 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-04 01:16:52.027877 | orchestrator | Saturday 04 April 2026 01:16:48 +0000 (0:00:00.332) 0:00:12.715 ********
2026-04-04 01:16:52.027889 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027895 | orchestrator |
2026-04-04 01:16:52.027901 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-04 01:16:52.027907 | orchestrator | Saturday 04 April 2026 01:16:48 +0000 (0:00:00.135) 0:00:12.850 ********
2026-04-04 01:16:52.027913 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:16:52.027919 | orchestrator |
2026-04-04 01:16:52.027924 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-04 01:16:52.027931 | orchestrator | Saturday 04 April 2026 01:16:48 +0000 (0:00:00.162) 0:00:13.012 ********
2026-04-04 01:16:52.027936 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027942 | orchestrator |
2026-04-04 01:16:52.027948 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-04 01:16:52.027954 | orchestrator | Saturday 04 April 2026 01:16:48 +0000 (0:00:00.129) 0:00:13.142 ********
2026-04-04 01:16:52.027960 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.027969 | orchestrator |
2026-04-04 01:16:52.027975 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-04 01:16:52.027981 | orchestrator | Saturday 04 April 2026 01:16:48 +0000 (0:00:00.131) 0:00:13.273 ********
2026-04-04 01:16:52.027987 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:16:52.027994 | orchestrator |
2026-04-04 01:16:52.028000 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-04 01:16:52.028005 | orchestrator | Saturday 04 April 2026 01:16:48 +0000 (0:00:00.253) 0:00:13.526 ********
2026-04-04 01:16:52.028011 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:16:52.028017 | orchestrator |
2026-04-04 01:16:52.028023 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:16:52.028029 | orchestrator | Saturday 04 April 2026 01:16:49 +0000 (0:00:00.247) 0:00:13.774 ********
2026-04-04 01:16:52.028035 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:16:52.028041 | orchestrator |
2026-04-04 01:16:52.028046 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:16:52.028052 | orchestrator | Saturday 04 April 2026 01:16:51 +0000 (0:00:01.892) 0:00:15.667 ********
2026-04-04 01:16:52.028058 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:16:52.028065 | orchestrator |
2026-04-04 01:16:52.028071 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:16:52.028077 | orchestrator | Saturday 04 April 2026 01:16:51 +0000 (0:00:00.267) 0:00:15.934 ********
2026-04-04 01:16:52.028083 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:16:52.028088 | orchestrator |
2026-04-04 01:16:52.028100 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:16:54.245891 | orchestrator | Saturday 04 April 2026 01:16:52 +0000 (0:00:00.620) 0:00:16.554 ********
2026-04-04 01:16:54.245968 | orchestrator |
2026-04-04 01:16:54.245975 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:16:54.245980 | orchestrator | Saturday 04 April 2026 01:16:52 +0000 (0:00:00.074) 0:00:16.629 ********
2026-04-04 01:16:54.245984 | orchestrator |
2026-04-04 01:16:54.245988 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:16:54.245992 | orchestrator | Saturday 04 April 2026 01:16:52 +0000 (0:00:00.073) 0:00:16.703 ********
2026-04-04 01:16:54.245996 | orchestrator |
2026-04-04 01:16:54.246000 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-04 01:16:54.246004 | orchestrator | Saturday 04 April 2026 01:16:52 +0000 (0:00:00.073) 0:00:16.776 ********
2026-04-04 01:16:54.246008 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:16:54.246047 | orchestrator |
2026-04-04 01:16:54.246052 | orchestrator | TASK [Print report file information] *******************************************
2026-04-04 01:16:54.246056 | orchestrator | Saturday 04 April 2026 01:16:53 +0000 (0:00:01.295) 0:00:18.072 ********
2026-04-04 01:16:54.246078 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-04 01:16:54.246082 | orchestrator |     "msg": [
2026-04-04 01:16:54.246087 | orchestrator |         "Validator run completed.",
2026-04-04 01:16:54.246092 | orchestrator |         "You can find the report file here:",
2026-04-04 01:16:54.246105 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-04-04T01:16:36+00:00-report.json",
2026-04-04 01:16:54.246111 | orchestrator |         "on the following host:",
2026-04-04 01:16:54.246115 | orchestrator |         "testbed-manager"
2026-04-04 01:16:54.246119 | orchestrator |     ]
2026-04-04 01:16:54.246124 | orchestrator | }
2026-04-04 01:16:54.246128 | orchestrator |
2026-04-04 01:16:54.246133 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:16:54.246141 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-04 01:16:54.246149 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:54.246161 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:16:54.246171 | orchestrator |
2026-04-04 01:16:54.246177 | orchestrator |
2026-04-04 01:16:54.246183 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:16:54.246189 | orchestrator | Saturday 04 April 2026 01:16:53 +0000 (0:00:00.411) 0:00:18.483 ********
2026-04-04 01:16:54.246195 | orchestrator | ===============================================================================
2026-04-04 01:16:54.246202 | orchestrator | Aggregate test results step one ----------------------------------------- 1.89s
2026-04-04 01:16:54.246223 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.77s
2026-04-04 01:16:54.246229 | orchestrator | Get container info ------------------------------------------------------ 1.58s
2026-04-04 01:16:54.246235 | orchestrator | Gather status data ------------------------------------------------------ 1.52s
2026-04-04 01:16:54.246242 | orchestrator | Write report file ------------------------------------------------------- 1.30s
2026-04-04 01:16:54.246247 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s
2026-04-04 01:16:54.246254 | orchestrator | Create report output directory ------------------------------------------ 0.65s
2026-04-04 01:16:54.246260 | orchestrator | Aggregate test results step three --------------------------------------- 0.62s
2026-04-04 01:16:54.246266 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.45s
2026-04-04 01:16:54.246272 | orchestrator | Print report file information ------------------------------------------- 0.41s
2026-04-04 01:16:54.246279 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s
2026-04-04 01:16:54.246285 | orchestrator | Set health test data ---------------------------------------------------- 0.33s
2026-04-04 01:16:54.246291 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s
2026-04-04 01:16:54.246296 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s
2026-04-04 01:16:54.246303 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2026-04-04 01:16:54.246309 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s
2026-04-04 01:16:54.246315 | orchestrator | Prepare status test vars ------------------------------------------------ 0.29s
2026-04-04 01:16:54.246319 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2026-04-04 01:16:54.246323 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s
2026-04-04 01:16:54.246327 | orchestrator | Set test result to passed if container is existing ---------------------- 0.29s
2026-04-04 01:16:54.469071 | orchestrator | + osism validate ceph-mgrs
2026-04-04 01:17:24.106929 | orchestrator |
2026-04-04 01:17:24.107016 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-04 01:17:24.107039 | orchestrator |
2026-04-04 01:17:24.107045 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-04 01:17:24.107059 | orchestrator | Saturday 04 April 2026 01:17:09 +0000 (0:00:00.513) 0:00:00.513 ********
2026-04-04 01:17:24.107088 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:17:24.107093 | orchestrator |
2026-04-04 01:17:24.107097 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-04 01:17:24.107101 | orchestrator | Saturday 04 April 2026 01:17:10 +0000 (0:00:00.998) 0:00:01.512 ********
2026-04-04 01:17:24.107106 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:17:24.107110 | orchestrator |
2026-04-04 01:17:24.107114 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-04 01:17:24.107118 | orchestrator | Saturday 04 April 2026 01:17:11 +0000 (0:00:00.694) 0:00:02.206 ********
2026-04-04 01:17:24.107121 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107134 | orchestrator |
2026-04-04 01:17:24.107139 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-04 01:17:24.107143 | orchestrator | Saturday 04 April 2026 01:17:11 +0000 (0:00:00.130) 0:00:02.336 ********
2026-04-04 01:17:24.107147 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107158 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:17:24.107164 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:17:24.107177 | orchestrator |
2026-04-04 01:17:24.107182 | orchestrator | TASK [Get container info] ******************************************************
2026-04-04 01:17:24.107188 | orchestrator | Saturday 04 April 2026 01:17:11 +0000 (0:00:00.275) 0:00:02.612 ********
2026-04-04 01:17:24.107194 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107199 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:17:24.107205 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:17:24.107211 | orchestrator |
2026-04-04 01:17:24.107217 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-04 01:17:24.107222 | orchestrator | Saturday 04 April 2026 01:17:13 +0000 (0:00:01.689) 0:00:04.301 ********
2026-04-04 01:17:24.107228 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107233 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:17:24.107238 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:17:24.107244 | orchestrator |
2026-04-04 01:17:24.107249 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-04 01:17:24.107254 | orchestrator | Saturday 04 April 2026 01:17:13 +0000 (0:00:00.303) 0:00:04.605 ********
2026-04-04 01:17:24.107260 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107266 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:17:24.107272 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:17:24.107277 | orchestrator |
2026-04-04 01:17:24.107283 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:17:24.107289 | orchestrator | Saturday 04 April 2026 01:17:13 +0000 (0:00:00.305) 0:00:04.910 ********
2026-04-04 01:17:24.107296 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107302 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:17:24.107309 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:17:24.107314 | orchestrator |
2026-04-04 01:17:24.107320 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-04 01:17:24.107325 | orchestrator | Saturday 04 April 2026 01:17:14 +0000 (0:00:00.330) 0:00:05.240 ********
2026-04-04 01:17:24.107330 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107336 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:17:24.107342 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:17:24.107347 | orchestrator |
2026-04-04 01:17:24.107353 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-04 01:17:24.107360 | orchestrator | Saturday 04 April 2026 01:17:14 +0000 (0:00:00.510) 0:00:05.750 ********
2026-04-04 01:17:24.107366 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107373 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:17:24.107378 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:17:24.107390 | orchestrator |
2026-04-04 01:17:24.107394 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:17:24.107398 | orchestrator | Saturday 04 April 2026 01:17:14 +0000 (0:00:00.300) 0:00:06.051 ********
2026-04-04 01:17:24.107401 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107405 | orchestrator |
2026-04-04 01:17:24.107409 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:17:24.107413 | orchestrator | Saturday 04 April 2026 01:17:15 +0000 (0:00:00.246) 0:00:06.297 ********
2026-04-04 01:17:24.107416 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107420 | orchestrator |
2026-04-04 01:17:24.107424 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:17:24.107428 | orchestrator | Saturday 04 April 2026 01:17:15 +0000 (0:00:00.259) 0:00:06.557 ********
2026-04-04 01:17:24.107434 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107439 | orchestrator |
2026-04-04 01:17:24.107445 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:24.107451 | orchestrator | Saturday 04 April 2026 01:17:15 +0000 (0:00:00.261) 0:00:06.818 ********
2026-04-04 01:17:24.107457 | orchestrator |
2026-04-04 01:17:24.107462 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:24.107471 | orchestrator | Saturday 04 April 2026 01:17:15 +0000 (0:00:00.085) 0:00:06.904 ********
2026-04-04 01:17:24.107479 | orchestrator |
2026-04-04 01:17:24.107488 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:24.107493 | orchestrator | Saturday 04 April 2026 01:17:15 +0000 (0:00:00.085) 0:00:06.989 ********
2026-04-04 01:17:24.107499 | orchestrator |
2026-04-04 01:17:24.107505 | orchestrator | TASK [Print report file information] *******************************************
2026-04-04 01:17:24.107511 | orchestrator | Saturday 04 April 2026 01:17:16 +0000 (0:00:00.256) 0:00:07.245 ********
2026-04-04 01:17:24.107516 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107523 | orchestrator |
2026-04-04 01:17:24.107530 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-04 01:17:24.107560 | orchestrator | Saturday 04 April 2026 01:17:16 +0000 (0:00:00.263) 0:00:07.509 ********
2026-04-04 01:17:24.107567 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107573 | orchestrator |
2026-04-04 01:17:24.107596 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-04 01:17:24.107603 | orchestrator | Saturday 04 April 2026 01:17:16 +0000 (0:00:00.255) 0:00:07.765 ********
2026-04-04 01:17:24.107609 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107616 | orchestrator |
2026-04-04 01:17:24.107628 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-04 01:17:24.107635 | orchestrator | Saturday 04 April 2026 01:17:16 +0000 (0:00:00.130) 0:00:07.895 ********
2026-04-04 01:17:24.107641 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:17:24.107647 | orchestrator |
2026-04-04 01:17:24.107653 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-04 01:17:24.107659 | orchestrator | Saturday 04 April 2026 01:17:18 +0000 (0:00:01.756) 0:00:09.652 ********
2026-04-04 01:17:24.107665 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107671 | orchestrator |
2026-04-04 01:17:24.107677 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-04 01:17:24.107684 | orchestrator | Saturday 04 April 2026 01:17:18 +0000 (0:00:00.248) 0:00:09.901 ********
2026-04-04 01:17:24.107689 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107695 | orchestrator |
2026-04-04 01:17:24.107718 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-04 01:17:24.107724 | orchestrator | Saturday 04 April 2026 01:17:19 +0000 (0:00:00.308) 0:00:10.209 ********
2026-04-04 01:17:24.107730 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107736 | orchestrator |
2026-04-04 01:17:24.107742 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-04 01:17:24.107748 | orchestrator | Saturday 04 April 2026 01:17:19 +0000 (0:00:00.138) 0:00:10.347 ********
2026-04-04 01:17:24.107762 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:17:24.107769 | orchestrator |
2026-04-04 01:17:24.107776 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-04 01:17:24.107783 | orchestrator | Saturday 04 April 2026 01:17:19 +0000 (0:00:00.161) 0:00:10.509 ********
2026-04-04 01:17:24.107789 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:17:24.107795 | orchestrator |
2026-04-04 01:17:24.107801 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-04 01:17:24.107807 | orchestrator | Saturday 04 April 2026 01:17:19 +0000 (0:00:00.253) 0:00:10.762 ********
2026-04-04 01:17:24.107813 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:17:24.107820 | orchestrator |
2026-04-04 01:17:24.107826 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:17:24.107832 | orchestrator | Saturday 04 April 2026 01:17:19 +0000 (0:00:00.241) 0:00:11.003 ********
2026-04-04 01:17:24.107838 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:17:24.107844 | orchestrator |
2026-04-04 01:17:24.107850 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:17:24.107856 | orchestrator | Saturday 04 April 2026 01:17:21 +0000 (0:00:01.728) 0:00:12.732 ********
2026-04-04 01:17:24.107862 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:17:24.107867 | orchestrator |
2026-04-04 01:17:24.107873 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:17:24.107878 | orchestrator | Saturday 04 April 2026 01:17:21 +0000 (0:00:00.267) 0:00:13.000 ********
2026-04-04 01:17:24.107884 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:17:24.107890 | orchestrator |
2026-04-04 01:17:24.107896 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:24.107902 | orchestrator | Saturday 04 April 2026 01:17:22 +0000 (0:00:00.276) 0:00:13.276 ********
2026-04-04 01:17:24.107907 | orchestrator |
2026-04-04 01:17:24.107913 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:24.107919 | orchestrator | Saturday 04 April 2026 01:17:22 +0000 (0:00:00.073) 0:00:13.350 ********
2026-04-04 01:17:24.107925 | orchestrator |
2026-04-04 01:17:24.107932 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:24.107937 | orchestrator | Saturday 04 April 2026 01:17:22 +0000 (0:00:00.087) 0:00:13.437 ********
2026-04-04 01:17:24.107944 | orchestrator |
2026-04-04 01:17:24.107949 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-04 01:17:24.107954 | orchestrator | Saturday 04 April 2026 01:17:22 +0000 (0:00:00.072) 0:00:13.509 ********
2026-04-04 01:17:24.107959 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-04 01:17:24.107965 | orchestrator |
2026-04-04 01:17:24.107970 | orchestrator | TASK [Print report file information] *******************************************
2026-04-04 01:17:24.107976 | orchestrator | Saturday 04 April 2026 01:17:23 +0000 (0:00:01.316) 0:00:14.826 ********
2026-04-04 01:17:24.107982 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-04 01:17:24.107987 | orchestrator |     "msg": [
2026-04-04 01:17:24.107993 | orchestrator |         "Validator run completed.",
2026-04-04 01:17:24.107999 | orchestrator |         "You can find the report file here:",
2026-04-04 01:17:24.108005 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-04-04T01:17:10+00:00-report.json",
2026-04-04 01:17:24.108012 | orchestrator |         "on the following host:",
2026-04-04 01:17:24.108018 | orchestrator |         "testbed-manager"
2026-04-04 01:17:24.108025 | orchestrator |     ]
2026-04-04 01:17:24.108031 | orchestrator | }
2026-04-04 01:17:24.108036 | orchestrator |
2026-04-04 01:17:24.108042 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:17:24.108049 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-04 01:17:24.108072 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:17:24.108089 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 01:17:24.448610 | orchestrator |
2026-04-04 01:17:24.448681 | orchestrator |
2026-04-04 01:17:24.448695 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:17:24.448701 | orchestrator | Saturday 04 April 2026 01:17:24 +0000 (0:00:00.381) 0:00:15.207 ********
2026-04-04 01:17:24.448706 | orchestrator | ===============================================================================
2026-04-04 01:17:24.448717 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.76s
2026-04-04 01:17:24.448721 | orchestrator | Aggregate test results step one ----------------------------------------- 1.73s
2026-04-04 01:17:24.448725 | orchestrator | Get container info ------------------------------------------------------ 1.69s
2026-04-04 01:17:24.448729 | orchestrator | Write report file ------------------------------------------------------- 1.32s
2026-04-04 01:17:24.448733 | orchestrator | Get timestamp for report file ------------------------------------------- 1.00s
2026-04-04 01:17:24.448737 | orchestrator | Create report output directory ------------------------------------------ 0.69s
2026-04-04 01:17:24.448741 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.51s
2026-04-04 01:17:24.448748 | orchestrator | Flush handlers ---------------------------------------------------------- 0.43s
2026-04-04 01:17:24.448754 | orchestrator | Print report file information ------------------------------------------- 0.38s
2026-04-04 01:17:24.448759 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-04-04 01:17:24.448769 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.31s
2026-04-04 01:17:24.448776 | orchestrator | Set test result to passed if container is existing ---------------------- 0.31s
2026-04-04 01:17:24.448781 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-04-04 01:17:24.448788 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s
2026-04-04 01:17:24.448793 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2026-04-04 01:17:24.448821 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s
2026-04-04 01:17:24.448828 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-04-04 01:17:24.448834 | orchestrator | Print report file information ------------------------------------------- 0.26s
2026-04-04 01:17:24.448840 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-04-04 01:17:24.448846 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2026-04-04 01:17:24.662130
| orchestrator | + osism validate ceph-osds 2026-04-04 01:17:43.145089 | orchestrator | 2026-04-04 01:17:43.145162 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-04 01:17:43.145169 | orchestrator | 2026-04-04 01:17:43.145174 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-04 01:17:43.145179 | orchestrator | Saturday 04 April 2026 01:17:39 +0000 (0:00:00.454) 0:00:00.454 ******** 2026-04-04 01:17:43.145183 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-04 01:17:43.145187 | orchestrator | 2026-04-04 01:17:43.145191 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-04 01:17:43.145195 | orchestrator | Saturday 04 April 2026 01:17:40 +0000 (0:00:00.963) 0:00:01.418 ******** 2026-04-04 01:17:43.145199 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-04 01:17:43.145203 | orchestrator | 2026-04-04 01:17:43.145207 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-04 01:17:43.145211 | orchestrator | Saturday 04 April 2026 01:17:40 +0000 (0:00:00.212) 0:00:01.631 ******** 2026-04-04 01:17:43.145230 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-04 01:17:43.145235 | orchestrator | 2026-04-04 01:17:43.145238 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-04 01:17:43.145242 | orchestrator | Saturday 04 April 2026 01:17:41 +0000 (0:00:00.632) 0:00:02.263 ******** 2026-04-04 01:17:43.145246 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:43.145250 | orchestrator | 2026-04-04 01:17:43.145254 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-04 01:17:43.145258 | orchestrator | Saturday 04 April 2026 01:17:41 +0000 (0:00:00.106) 0:00:02.369 
******** 2026-04-04 01:17:43.145262 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:17:43.145266 | orchestrator | 2026-04-04 01:17:43.145270 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-04 01:17:43.145273 | orchestrator | Saturday 04 April 2026 01:17:41 +0000 (0:00:00.119) 0:00:02.489 ******** 2026-04-04 01:17:43.145277 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:17:43.145281 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:17:43.145284 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:17:43.145288 | orchestrator | 2026-04-04 01:17:43.145292 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-04 01:17:43.145295 | orchestrator | Saturday 04 April 2026 01:17:41 +0000 (0:00:00.411) 0:00:02.900 ******** 2026-04-04 01:17:43.145299 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:43.145303 | orchestrator | 2026-04-04 01:17:43.145307 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-04 01:17:43.145310 | orchestrator | Saturday 04 April 2026 01:17:41 +0000 (0:00:00.143) 0:00:03.043 ******** 2026-04-04 01:17:43.145314 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:43.145318 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:17:43.145321 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:17:43.145325 | orchestrator | 2026-04-04 01:17:43.145329 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-04 01:17:43.145333 | orchestrator | Saturday 04 April 2026 01:17:42 +0000 (0:00:00.290) 0:00:03.334 ******** 2026-04-04 01:17:43.145337 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:43.145340 | orchestrator | 2026-04-04 01:17:43.145344 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-04 01:17:43.145348 | orchestrator | Saturday 04 April 2026 
01:17:42 +0000 (0:00:00.326) 0:00:03.660 ******** 2026-04-04 01:17:43.145351 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:43.145355 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:17:43.145359 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:17:43.145363 | orchestrator | 2026-04-04 01:17:43.145376 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-04 01:17:43.145380 | orchestrator | Saturday 04 April 2026 01:17:42 +0000 (0:00:00.294) 0:00:03.955 ******** 2026-04-04 01:17:43.145386 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eea48a0647ead5c28d823e3015fa83ef83c96c05b75a820e5d032fbd66d53441', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-04 01:17:43.145392 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3f167d422cf064d458df3e371da1cfb67518b08475f6a283c52c2959cb84b859', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-04 01:17:43.145396 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4c7b96e69d58d459b88bd9dfd9cd6374a826c904fcf13053df934749b73a0027', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-04 01:17:43.145401 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ea10a9f1fea5f86b242bde3db2d8027148fbb5af165dc13e679f03e66e6c217e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-04 01:17:43.145414 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3e01edac5c3aba3a381dacadcbfa707cd7a0045658b0759216bac9b6e113d2fa', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 
'state': 'running', 'status': 'Up 14 minutes'})  2026-04-04 01:17:43.145428 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7009d1a0d630b28f7fd087d982df49cdb2260c1518a5a95a3ecccbe4f0b2d690', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-04 01:17:43.145432 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c6b1837f40d28c345d884c0a24b88f6f66f9493610a23dc3a3644616dcabd3ed', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-04 01:17:43.145438 | orchestrator | skipping: [testbed-node-3] => (item={'id': '11a086774884701f71c1ffddd847e4cdc89d6941b6d1d757cc5a0512e1fcb5cf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-04 01:17:43.145442 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4ea59ce1b39a6397016463c774b00832f889e3dd75951bcf0be9efed61a3e9cf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-04 01:17:43.145446 | orchestrator | skipping: [testbed-node-3] => (item={'id': '13b8a2560e2bf1e2054d8bb27b1164734cfe558059713a2eb0fae274acc50009', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-04 01:17:43.145450 | orchestrator | ok: [testbed-node-3] => (item={'id': '0b03ccbc33332d8d3930dcc09043b22c9497c3d55f12aacfdc0b285f7e056a9e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-04 01:17:43.145454 | orchestrator | ok: [testbed-node-3] => (item={'id': '42f74cbfd282e507e4f7af760edcefcfc3896f7dd336d9bd1d8d4eec31ce1b1c', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-04 01:17:43.145458 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8aa619ea35aab905d6dc49fb1dca2e686f91219387d7e83d030e332b005bf543', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-04 01:17:43.145462 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0f97e805bbb27572d84dce14a76f51f82fc68ab4cca0f9efc748edc7e7de95d9', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})  2026-04-04 01:17:43.145469 | orchestrator | skipping: [testbed-node-3] => (item={'id': '411ba71daf28d28cb01ac3f093b97f4f68dc7514642c3c1e49a437940c3e3dc9', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-04 01:17:43.145473 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3a4d7d22c96731f4aa2877de8c0d6733ba2b79c2fa328196f39cb43029cbffc2', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-04 01:17:43.145476 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d0faa61568071996349b293ded6b11682f44e5273ee53ac0bb0059912eb6ed8', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-04 01:17:43.145480 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65b0e9106e055d38f259f1bcfe3b94edc0d3a23cbe0b6e1a81cf7a003f53e495', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-04 01:17:43.145487 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'2850406d42ef3ba655718c8f2cf75f8522e48531f5afa58605610f829b844b29', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-04 01:17:43.145492 | orchestrator | skipping: [testbed-node-4] => (item={'id': '141d8790ffbb7c5ce4a0573ce37418fee196a79dacea4f64d7570a68fda05d91', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-04 01:17:43.145495 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e221e3bbbeb96c1f9a5bd594b0f44f6e424ddb3eac640b9825ba12004e8f15cd', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-04 01:17:43.145503 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02d3511e7d744ceb2dd0c19930fee4e43d1d8cec8ec33aacf7480ea8a43170d3', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-04 01:17:43.299012 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd0e6b3cada27aad66b1db6534482343613985fccd38ebafeaaf2ded324e5c9fa', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-04 01:17:43.299086 | orchestrator | skipping: [testbed-node-4] => (item={'id': '203224d3066ea23148ea0516757df35b92927d61827ab155d9a6b4da1a8ec1d7', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-04 01:17:43.299094 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ca7bf7fac12388e12ea3bb741e908588a7146ed7542a060ee2411c54f88006c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 
'state': 'running', 'status': 'Up 15 minutes'})  2026-04-04 01:17:43.299102 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b5f0b5be5b46564e9183aad95dbaac4c805a4c7175eef563f797e4d4f483bfb9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-04 01:17:43.299108 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6d3058929ff3ec48ae5a41b7686fb564a6fbf34893822a863cb6e488b351ed28', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-04 01:17:43.299115 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7a4802c5397f12da8b73cdde6685dc595c16ade33cd632f7e017720571561fa7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-04 01:17:43.299125 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c1f6335ce14a55eb0209299a7a5d4915fbf6d3b9d1f659f4846ff8ae9e49789d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-04 01:17:43.299134 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd9ba08e521e740b51687eaf3dc996046dd9a36ba4e07c9ebd137f23f1d9bba5b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-04 01:17:43.299141 | orchestrator | skipping: [testbed-node-4] => (item={'id': '38f5fdb3751bebf2a851ee9813df32ce4b75f63c7d0bd2ee7d1c8eedf0f10856', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-04 01:17:43.299147 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e7648b1fdf006d644c855220899c9b8347652d30f37f50c342ca860709b11c98', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})  2026-04-04 01:17:43.299175 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc7edc1ecad9646a4d0e686c285a6593a6c5cb737d88a2ffe955127614056e55', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-04 01:17:43.299183 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae47bdbc35b51de8849144150e984b54239eaa2c1ff7845a523550804be218f2', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-04 01:17:43.299189 | orchestrator | skipping: [testbed-node-4] => (item={'id': '73d095d6a6782c5c1f6d0cd21d8dbf75e767097420f0b53f12e846f4b863aba8', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-04 01:17:43.299195 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8b87e5dcf77b81eedae99812958c96eab51473a78e50e36c83b6678eb7a4100c', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-04 01:17:43.299201 | orchestrator | skipping: [testbed-node-5] => (item={'id': '522c40f8ae04f71fa25458ba4ec8d4b017dc2687a5f5ae5f804a26b8f7da624c', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-04 01:17:43.299237 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dc17c3a7307e2c261956f3729fc5759739fe07fa97e47d6516236b30895d3500', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-04 01:17:43.299246 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'825b0c127ad7f5c29eac63eba795de4318874830b110c9c9d608a4777946b8a1', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-04 01:17:43.299252 | orchestrator | skipping: [testbed-node-5] => (item={'id': '788868442eda5fe1fe48e06cc75cab3e2864254e1689c02bb8676e995e85d751', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-04 01:17:43.299259 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd87fad90b1af149d599af3013178297df7d4aa61fc930a82af5817a2c2bbcd94', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-04 01:17:43.299265 | orchestrator | skipping: [testbed-node-5] => (item={'id': '28d4bf3227bfc414ec29978a495d3db1ebf80eebc6e0d0971a137d228eadefc5', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-04 01:17:43.299272 | orchestrator | skipping: [testbed-node-5] => (item={'id': '02f564fdb11b8b8bdd6ec9df6a7f76db2fd60f1a412a81bd0ce8b798fd539305', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-04 01:17:43.299278 | orchestrator | skipping: [testbed-node-5] => (item={'id': '69c877911856556cb8460b32443bb09d5e28f04a153ce2069e9f438f214ade0b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-04 01:17:43.299286 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd157cb5b442881837802555abb6d0a64c7a2d08a08cd4cac1537477e7bb22af', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': 
'/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-04 01:17:43.299290 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de477def24af0d87d4aa3d9bbf26d3c88d34c47da50ad95719603715e9957ff2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-04 01:17:43.299301 | orchestrator | ok: [testbed-node-5] => (item={'id': '314c5e78551b1920d9448f1895e33df9c152131045f98422727c165c2297eb67', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-04 01:17:43.299305 | orchestrator | ok: [testbed-node-5] => (item={'id': '975f3228b12b4e5354c44085ec272955947de2856988050f30eacb0c48200e2a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-04 01:17:43.299309 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae886fc5c5fa9adce1a3d2383093405b4000ba3b8ba24fe89812e9413aa2a5d6', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-04 01:17:43.299313 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03cb1c6612fc7ea869a1f0e8760e41e935628b3076d40e9a98b94ee71fc74fe5', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})  2026-04-04 01:17:43.299317 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a4a2900314d98050155a5afceffa8de4c8da6270ddc1d2adbe24130d3afb6093', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-04 01:17:43.299320 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f54d12319e4be1c618949899f9f2421818cf45af42ac8b3db1d6bd690c172adb', 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-04 01:17:43.299324 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f491381e1f911b21eb72da50e5121744aac3770feb9be346fc9e875f82c16674', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-04 01:17:43.299332 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cb4f295b34ebece64a1bee670dabe5d065521395ef10141396e454f682698114', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-04 01:17:56.488433 | orchestrator | 2026-04-04 01:17:56.488603 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-04 01:17:56.488618 | orchestrator | Saturday 04 April 2026 01:17:43 +0000 (0:00:00.678) 0:00:04.633 ******** 2026-04-04 01:17:56.488623 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:56.488628 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:17:56.488632 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:17:56.488636 | orchestrator | 2026-04-04 01:17:56.488640 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-04 01:17:56.488644 | orchestrator | Saturday 04 April 2026 01:17:43 +0000 (0:00:00.301) 0:00:04.935 ******** 2026-04-04 01:17:56.488648 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:17:56.488653 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:17:56.488657 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:17:56.488660 | orchestrator | 2026-04-04 01:17:56.488664 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-04 01:17:56.488668 | orchestrator | Saturday 04 April 2026 01:17:44 +0000 (0:00:00.289) 0:00:05.224 ******** 2026-04-04 01:17:56.488672 | orchestrator | ok: 
[testbed-node-3] 2026-04-04 01:17:56.488676 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:17:56.488680 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:17:56.488684 | orchestrator | 2026-04-04 01:17:56.488688 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-04 01:17:56.488691 | orchestrator | Saturday 04 April 2026 01:17:44 +0000 (0:00:00.332) 0:00:05.556 ******** 2026-04-04 01:17:56.488695 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:56.488699 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:17:56.488703 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:17:56.488725 | orchestrator | 2026-04-04 01:17:56.488729 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-04 01:17:56.488733 | orchestrator | Saturday 04 April 2026 01:17:44 +0000 (0:00:00.453) 0:00:06.010 ******** 2026-04-04 01:17:56.488737 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-04 01:17:56.488742 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-04 01:17:56.488746 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:17:56.488750 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-04 01:17:56.488754 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-04 01:17:56.488758 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:17:56.488761 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-04 01:17:56.488765 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-04 01:17:56.488769 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:17:56.488772 | 
orchestrator | 2026-04-04 01:17:56.488776 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-04 01:17:56.488791 | orchestrator | Saturday 04 April 2026 01:17:45 +0000 (0:00:00.310) 0:00:06.321 ******** 2026-04-04 01:17:56.488796 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:56.488799 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:17:56.488803 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:17:56.488807 | orchestrator | 2026-04-04 01:17:56.488811 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-04 01:17:56.488814 | orchestrator | Saturday 04 April 2026 01:17:45 +0000 (0:00:00.310) 0:00:06.631 ******** 2026-04-04 01:17:56.488818 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:17:56.488822 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:17:56.488826 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:17:56.488829 | orchestrator | 2026-04-04 01:17:56.488833 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-04 01:17:56.488837 | orchestrator | Saturday 04 April 2026 01:17:45 +0000 (0:00:00.279) 0:00:06.911 ******** 2026-04-04 01:17:56.488841 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:17:56.488853 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:17:56.488857 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:17:56.488866 | orchestrator | 2026-04-04 01:17:56.488870 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-04 01:17:56.488874 | orchestrator | Saturday 04 April 2026 01:17:46 +0000 (0:00:00.475) 0:00:07.387 ******** 2026-04-04 01:17:56.488878 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:17:56.488882 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:17:56.488885 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:17:56.488889 | orchestrator | 2026-04-04 
01:17:56.488893 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:17:56.488897 | orchestrator | Saturday 04 April 2026 01:17:46 +0000 (0:00:00.304) 0:00:07.692 ********
2026-04-04 01:17:56.488901 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.488904 | orchestrator |
2026-04-04 01:17:56.488908 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:17:56.488912 | orchestrator | Saturday 04 April 2026 01:17:46 +0000 (0:00:00.269) 0:00:07.961 ********
2026-04-04 01:17:56.488916 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.488919 | orchestrator |
2026-04-04 01:17:56.488923 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:17:56.488927 | orchestrator | Saturday 04 April 2026 01:17:47 +0000 (0:00:00.246) 0:00:08.208 ********
2026-04-04 01:17:56.488931 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.488935 | orchestrator |
2026-04-04 01:17:56.488938 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:56.488946 | orchestrator | Saturday 04 April 2026 01:17:47 +0000 (0:00:00.236) 0:00:08.444 ********
2026-04-04 01:17:56.488950 | orchestrator |
2026-04-04 01:17:56.488954 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:56.488958 | orchestrator | Saturday 04 April 2026 01:17:47 +0000 (0:00:00.067) 0:00:08.511 ********
2026-04-04 01:17:56.488961 | orchestrator |
2026-04-04 01:17:56.488965 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:17:56.488980 | orchestrator | Saturday 04 April 2026 01:17:47 +0000 (0:00:00.064) 0:00:08.576 ********
2026-04-04 01:17:56.488984 | orchestrator |
2026-04-04 01:17:56.488988 | orchestrator | TASK [Print report file information] *******************************************
2026-04-04 01:17:56.488992 | orchestrator | Saturday 04 April 2026 01:17:47 +0000 (0:00:00.066) 0:00:08.643 ********
2026-04-04 01:17:56.488996 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.489001 | orchestrator |
2026-04-04 01:17:56.489005 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-04-04 01:17:56.489010 | orchestrator | Saturday 04 April 2026 01:17:48 +0000 (0:00:00.613) 0:00:09.257 ********
2026-04-04 01:17:56.489014 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.489018 | orchestrator |
2026-04-04 01:17:56.489023 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:17:56.489027 | orchestrator | Saturday 04 April 2026 01:17:48 +0000 (0:00:00.255) 0:00:09.513 ********
2026-04-04 01:17:56.489032 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489036 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:17:56.489041 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:17:56.489045 | orchestrator |
2026-04-04 01:17:56.489049 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-04-04 01:17:56.489054 | orchestrator | Saturday 04 April 2026 01:17:48 +0000 (0:00:00.302) 0:00:09.815 ********
2026-04-04 01:17:56.489059 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489063 | orchestrator |
2026-04-04 01:17:56.489067 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-04-04 01:17:56.489072 | orchestrator | Saturday 04 April 2026 01:17:49 +0000 (0:00:00.245) 0:00:10.060 ********
2026-04-04 01:17:56.489076 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 01:17:56.489081 | orchestrator |
2026-04-04 01:17:56.489085 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-04-04 01:17:56.489090 | orchestrator | Saturday 04 April 2026 01:17:51 +0000 (0:00:02.159) 0:00:12.220 ********
2026-04-04 01:17:56.489094 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489099 | orchestrator |
2026-04-04 01:17:56.489103 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-04-04 01:17:56.489109 | orchestrator | Saturday 04 April 2026 01:17:51 +0000 (0:00:00.135) 0:00:12.356 ********
2026-04-04 01:17:56.489115 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489121 | orchestrator |
2026-04-04 01:17:56.489127 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-04-04 01:17:56.489133 | orchestrator | Saturday 04 April 2026 01:17:51 +0000 (0:00:00.302) 0:00:12.659 ********
2026-04-04 01:17:56.489140 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.489147 | orchestrator |
2026-04-04 01:17:56.489152 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-04-04 01:17:56.489157 | orchestrator | Saturday 04 April 2026 01:17:51 +0000 (0:00:00.135) 0:00:12.794 ********
2026-04-04 01:17:56.489161 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489166 | orchestrator |
2026-04-04 01:17:56.489170 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:17:56.489175 | orchestrator | Saturday 04 April 2026 01:17:51 +0000 (0:00:00.129) 0:00:12.923 ********
2026-04-04 01:17:56.489179 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489184 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:17:56.489189 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:17:56.489193 | orchestrator |
2026-04-04 01:17:56.489198 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-04-04 01:17:56.489206 | orchestrator | Saturday 04 April 2026 01:17:52 +0000 (0:00:00.454) 0:00:13.378 ********
2026-04-04 01:17:56.489211 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:17:56.489218 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:17:56.489224 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:17:56.489230 | orchestrator |
2026-04-04 01:17:56.489236 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-04-04 01:17:56.489242 | orchestrator | Saturday 04 April 2026 01:17:54 +0000 (0:00:01.804) 0:00:15.182 ********
2026-04-04 01:17:56.489248 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489255 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:17:56.489261 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:17:56.489265 | orchestrator |
2026-04-04 01:17:56.489270 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-04-04 01:17:56.489274 | orchestrator | Saturday 04 April 2026 01:17:54 +0000 (0:00:00.306) 0:00:15.489 ********
2026-04-04 01:17:56.489280 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489286 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:17:56.489294 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:17:56.489300 | orchestrator |
2026-04-04 01:17:56.489307 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-04-04 01:17:56.489314 | orchestrator | Saturday 04 April 2026 01:17:54 +0000 (0:00:00.476) 0:00:15.966 ********
2026-04-04 01:17:56.489319 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.489324 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:17:56.489328 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:17:56.489333 | orchestrator |
2026-04-04 01:17:56.489337 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-04-04 01:17:56.489341 | orchestrator | Saturday 04 April 2026 01:17:55 +0000 (0:00:00.523) 0:00:16.490 ********
2026-04-04 01:17:56.489346 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:17:56.489351 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:17:56.489355 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:17:56.489359 | orchestrator |
2026-04-04 01:17:56.489364 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-04-04 01:17:56.489368 | orchestrator | Saturday 04 April 2026 01:17:55 +0000 (0:00:00.327) 0:00:16.818 ********
2026-04-04 01:17:56.489372 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.489376 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:17:56.489379 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:17:56.489383 | orchestrator |
2026-04-04 01:17:56.489387 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-04-04 01:17:56.489391 | orchestrator | Saturday 04 April 2026 01:17:56 +0000 (0:00:00.292) 0:00:17.110 ********
2026-04-04 01:17:56.489394 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:17:56.489398 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:17:56.489402 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:17:56.489406 | orchestrator |
2026-04-04 01:17:56.489412 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:18:03.754495 | orchestrator | Saturday 04 April 2026 01:17:56 +0000 (0:00:00.426) 0:00:17.537 ********
2026-04-04 01:18:03.754605 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:18:03.754613 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:18:03.754617 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:18:03.754621 | orchestrator |
2026-04-04 01:18:03.754626 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-04-04 01:18:03.754631 | orchestrator | Saturday 04 April 2026 01:17:56 +0000 (0:00:00.487) 0:00:18.024 ********
2026-04-04 01:18:03.754635 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:18:03.754638 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:18:03.754642 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:18:03.754646 | orchestrator |
2026-04-04 01:18:03.754650 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-04 01:18:03.754653 | orchestrator | Saturday 04 April 2026 01:17:57 +0000 (0:00:00.499) 0:00:18.524 ********
2026-04-04 01:18:03.754676 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:18:03.754680 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:18:03.754684 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:18:03.754688 | orchestrator |
2026-04-04 01:18:03.754692 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-04 01:18:03.754696 | orchestrator | Saturday 04 April 2026 01:17:57 +0000 (0:00:00.294) 0:00:18.819 ********
2026-04-04 01:18:03.754700 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:18:03.754705 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:18:03.754709 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:18:03.754712 | orchestrator |
2026-04-04 01:18:03.754716 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-04 01:18:03.754720 | orchestrator | Saturday 04 April 2026 01:17:58 +0000 (0:00:00.462) 0:00:19.282 ********
2026-04-04 01:18:03.754724 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:18:03.754727 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:18:03.754731 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:18:03.754735 | orchestrator |
2026-04-04 01:18:03.754738 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-04 01:18:03.754742 | orchestrator | Saturday 04 April 2026 01:17:58 +0000 (0:00:00.332) 0:00:19.614 ********
2026-04-04 01:18:03.754747 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:18:03.754751 | orchestrator |
2026-04-04 01:18:03.754757 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-04 01:18:03.754763 | orchestrator | Saturday 04 April 2026 01:17:58 +0000 (0:00:00.267) 0:00:19.882 ********
2026-04-04 01:18:03.754769 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:18:03.754774 | orchestrator |
2026-04-04 01:18:03.754779 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:18:03.754829 | orchestrator | Saturday 04 April 2026 01:17:59 +0000 (0:00:00.252) 0:00:20.134 ********
2026-04-04 01:18:03.754836 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:18:03.754842 | orchestrator |
2026-04-04 01:18:03.754848 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:18:03.754857 | orchestrator | Saturday 04 April 2026 01:18:00 +0000 (0:00:01.799) 0:00:21.934 ********
2026-04-04 01:18:03.754864 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:18:03.754870 | orchestrator |
2026-04-04 01:18:03.754876 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:18:03.754882 | orchestrator | Saturday 04 April 2026 01:18:01 +0000 (0:00:00.254) 0:00:22.189 ********
2026-04-04 01:18:03.754888 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:18:03.754895 | orchestrator |
2026-04-04 01:18:03.754901 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:18:03.754907 | orchestrator | Saturday 04 April 2026 01:18:01 +0000 (0:00:00.068) 0:00:22.453 ********
2026-04-04 01:18:03.754913 | orchestrator |
2026-04-04 01:18:03.754918 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:18:03.754924 | orchestrator | Saturday 04 April 2026 01:18:01 +0000 (0:00:00.068) 0:00:22.521 ********
2026-04-04 01:18:03.754930 | orchestrator |
2026-04-04 01:18:03.754936 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:18:03.754942 | orchestrator | Saturday 04 April 2026 01:18:01 +0000 (0:00:00.263) 0:00:22.785 ********
2026-04-04 01:18:03.754948 | orchestrator |
2026-04-04 01:18:03.754953 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-04 01:18:03.754959 | orchestrator | Saturday 04 April 2026 01:18:01 +0000 (0:00:00.068) 0:00:22.854 ********
2026-04-04 01:18:03.754964 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:18:03.754970 | orchestrator |
2026-04-04 01:18:03.754976 | orchestrator | TASK [Print report file information] *******************************************
2026-04-04 01:18:03.754982 | orchestrator | Saturday 04 April 2026 01:18:03 +0000 (0:00:01.285) 0:00:24.139 ********
2026-04-04 01:18:03.754995 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-04 01:18:03.755001 | orchestrator |  "msg": [
2026-04-04 01:18:03.755007 | orchestrator |  "Validator run completed.",
2026-04-04 01:18:03.755014 | orchestrator |  "You can find the report file here:",
2026-04-04 01:18:03.755020 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-04T01:17:40+00:00-report.json",
2026-04-04 01:18:03.755027 | orchestrator |  "on the following host:",
2026-04-04 01:18:03.755034 | orchestrator |  "testbed-manager"
2026-04-04 01:18:03.755040 | orchestrator |  ]
2026-04-04 01:18:03.755046 | orchestrator | }
2026-04-04 01:18:03.755052 | orchestrator |
2026-04-04 01:18:03.755059 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:18:03.755067 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-04 01:18:03.755075 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-04 01:18:03.755133 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-04 01:18:03.755142 | orchestrator |
2026-04-04 01:18:03.755148 | orchestrator |
2026-04-04 01:18:03.755154 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:18:03.755160 | orchestrator | Saturday 04 April 2026 01:18:03 +0000 (0:00:00.393) 0:00:24.533 ********
2026-04-04 01:18:03.755166 | orchestrator | ===============================================================================
2026-04-04 01:18:03.755173 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.16s
2026-04-04 01:18:03.755179 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.80s
2026-04-04 01:18:03.755185 | orchestrator | Aggregate test results step one ----------------------------------------- 1.80s
2026-04-04 01:18:03.755192 | orchestrator | Write report file ------------------------------------------------------- 1.29s
2026-04-04 01:18:03.755199 | orchestrator | Get timestamp for report file ------------------------------------------- 0.96s
2026-04-04 01:18:03.755206 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.68s
2026-04-04 01:18:03.755213 | orchestrator | Create report output directory ------------------------------------------ 0.63s
2026-04-04 01:18:03.755220 | orchestrator | Print report file information ------------------------------------------- 0.61s
2026-04-04 01:18:03.755226 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.52s
2026-04-04 01:18:03.755231 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s
2026-04-04 01:18:03.755235 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s
2026-04-04 01:18:03.755239 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s
2026-04-04 01:18:03.755244 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.48s
2026-04-04 01:18:03.755248 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.46s
2026-04-04 01:18:03.755253 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s
2026-04-04 01:18:03.755259 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s
2026-04-04 01:18:03.755265 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.43s
2026-04-04 01:18:03.755271 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.41s
2026-04-04 01:18:03.755277 | orchestrator | Flush handlers ---------------------------------------------------------- 0.40s
2026-04-04 01:18:03.755284 | orchestrator | Print report file information ------------------------------------------- 0.39s
2026-04-04 01:18:03.954885 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-04-04 01:18:03.963036 | orchestrator | + set -e
2026-04-04 01:18:03.963286 | orchestrator | + source /opt/manager-vars.sh
2026-04-04 01:18:03.963322 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-04 01:18:03.963380 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-04 01:18:03.963386 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-04 01:18:03.963390 | orchestrator | ++ CEPH_VERSION=reef
2026-04-04 01:18:03.963394 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-04 01:18:03.963399 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-04 01:18:03.963403 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-04 01:18:03.963407 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-04 01:18:03.963411 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-04-04 01:18:03.963415 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-04-04 01:18:03.963419 | orchestrator | ++ export ARA=false
2026-04-04 01:18:03.963423 | orchestrator | ++ ARA=false
2026-04-04 01:18:03.963427 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-04 01:18:03.963431 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-04 01:18:03.963435 | orchestrator | ++ export TEMPEST=true
2026-04-04 01:18:03.963439 | orchestrator | ++ TEMPEST=true
2026-04-04 01:18:03.963443 | orchestrator | ++ export IS_ZUUL=true
2026-04-04 01:18:03.963473 | orchestrator | ++ IS_ZUUL=true
2026-04-04 01:18:03.963486 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-04 01:18:03.963490 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-04-04 01:18:03.963494 | orchestrator | ++ export EXTERNAL_API=false
2026-04-04 01:18:03.963498 | orchestrator | ++ EXTERNAL_API=false
2026-04-04 01:18:03.963501 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-04 01:18:03.963505 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-04 01:18:03.963509 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-04 01:18:03.963563 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-04 01:18:03.963568 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-04 01:18:03.963572 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-04 01:18:03.963576 | orchestrator | + source /etc/os-release
2026-04-04 01:18:03.963579 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-04-04 01:18:03.963583 | orchestrator | ++ NAME=Ubuntu
2026-04-04 01:18:03.963587 | orchestrator | ++ VERSION_ID=24.04
2026-04-04 01:18:03.963591 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-04-04 01:18:03.963595 | orchestrator | ++ VERSION_CODENAME=noble
2026-04-04 01:18:03.963599 | orchestrator | ++ ID=ubuntu
2026-04-04 01:18:03.963602 | orchestrator | ++ ID_LIKE=debian
2026-04-04 01:18:03.963606 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-04-04 01:18:03.963610 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-04-04 01:18:03.963614 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-04-04 01:18:03.963618 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-04-04 01:18:03.963622 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-04-04 01:18:03.963626 | orchestrator | ++ LOGO=ubuntu-logo
2026-04-04 01:18:03.963630 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-04-04 01:18:03.963635 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-04-04 01:18:03.963639 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-04 01:18:03.984628 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-04 01:18:27.069912 | orchestrator |
2026-04-04 01:18:27.069984 | orchestrator | # Status of Elasticsearch
2026-04-04 01:18:27.069991 | orchestrator |
2026-04-04 01:18:27.069996 | orchestrator | + pushd /opt/configuration/contrib
2026-04-04 01:18:27.070001 | orchestrator | + echo
2026-04-04 01:18:27.070005 | orchestrator | + echo '# Status of Elasticsearch'
2026-04-04 01:18:27.070009 | orchestrator | + echo
2026-04-04 01:18:27.070056 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-04-04 01:18:27.231014 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-04-04 01:18:27.231081 | orchestrator |
2026-04-04 01:18:27.231088 | orchestrator | # Status of MariaDB
2026-04-04 01:18:27.231093 | orchestrator |
2026-04-04 01:18:27.231097 | orchestrator | + echo
2026-04-04 01:18:27.231101 | orchestrator | + echo '# Status of MariaDB'
2026-04-04 01:18:27.231105 | orchestrator | + echo
2026-04-04 01:18:27.231272 | orchestrator | ++ semver latest 10.0.0-0
2026-04-04 01:18:27.268930 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-04 01:18:27.269009 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-04 01:18:27.269023 | orchestrator | + osism status database
2026-04-04 01:18:28.890556 | orchestrator | 2026-04-04 01:18:28 | ERROR  | Unable to get ansible vault password
2026-04-04 01:18:28.890647 | orchestrator | 2026-04-04 01:18:28 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:18:28.890658 | orchestrator | 2026-04-04 01:18:28 | ERROR  | Dropping encrypted entries
2026-04-04 01:18:28.923847 | orchestrator | 2026-04-04 01:18:28 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0...
2026-04-04 01:18:28.935360 | orchestrator | 2026-04-04 01:18:28 | INFO  | Cluster Status: Primary
2026-04-04 01:18:28.935436 | orchestrator | 2026-04-04 01:18:28 | INFO  | Connected: ON
2026-04-04 01:18:28.935442 | orchestrator | 2026-04-04 01:18:28 | INFO  | Ready: ON
2026-04-04 01:18:28.935447 | orchestrator | 2026-04-04 01:18:28 | INFO  | Cluster Size: 3
2026-04-04 01:18:28.935451 | orchestrator | 2026-04-04 01:18:28 | INFO  | Local State: Synced
2026-04-04 01:18:28.935456 | orchestrator | 2026-04-04 01:18:28 | INFO  | Cluster State UUID: 05db8372-2fc1-11f1-87f3-7fba8c557665
2026-04-04 01:18:28.935462 | orchestrator | 2026-04-04 01:18:28 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306
2026-04-04 01:18:28.935467 | orchestrator | 2026-04-04 01:18:28 | INFO  | Galera Version: 26.4.25(r7387a566)
2026-04-04 01:18:28.935471 | orchestrator | 2026-04-04 01:18:28 | INFO  | Local Node UUID: 39bf95ae-2fc1-11f1-95d4-c6a014c66cdb
2026-04-04 01:18:28.935476 | orchestrator | 2026-04-04 01:18:28 | INFO  | Flow Control Paused: 0.00%
2026-04-04 01:18:28.935482 | orchestrator | 2026-04-04 01:18:28 | INFO  | Recv Queue Avg: 0.021978
2026-04-04 01:18:28.935488 | orchestrator | 2026-04-04 01:18:28 | INFO  | Send Queue Avg: 0.00090212
2026-04-04 01:18:28.935495 | orchestrator | 2026-04-04 01:18:28 | INFO  | Transactions: 4394 local commits, 6595 replicated, 91 received
2026-04-04 01:18:28.935522 | orchestrator | 2026-04-04 01:18:28 | INFO  | Conflicts: 0 cert failures, 0 bf aborts
2026-04-04 01:18:28.935549 | orchestrator | 2026-04-04 01:18:28 | INFO  | MariaDB Uptime: 21 minutes, 12 seconds
2026-04-04 01:18:28.935554 | orchestrator | 2026-04-04 01:18:28 | INFO  | Threads: 150 connected, 1 running
2026-04-04 01:18:28.935557 | orchestrator | 2026-04-04 01:18:28 | INFO  | Queries: 182857 total, 0 slow
2026-04-04 01:18:28.935561 | orchestrator | 2026-04-04 01:18:28 | INFO  | Aborted Connects: 148
2026-04-04 01:18:28.935565 | orchestrator | 2026-04-04 01:18:28 | INFO  | MariaDB Galera Cluster validation PASSED
2026-04-04 01:18:29.200967 | orchestrator |
2026-04-04 01:18:29.201041 | orchestrator | # Status of Prometheus
2026-04-04 01:18:29.201052 | orchestrator |
2026-04-04 01:18:29.201059 | orchestrator | + echo
2026-04-04 01:18:29.201065 | orchestrator | + echo '# Status of Prometheus'
2026-04-04 01:18:29.201071 | orchestrator | + echo
2026-04-04 01:18:29.201078 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-04 01:18:29.253146 | orchestrator | Unauthorized
2026-04-04 01:18:29.257041 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-04 01:18:29.317242 | orchestrator | Unauthorized
2026-04-04 01:18:29.320031 | orchestrator |
2026-04-04 01:18:29.320083 | orchestrator | # Status of RabbitMQ
2026-04-04 01:18:29.320089 | orchestrator |
2026-04-04 01:18:29.320093 | orchestrator | + echo
2026-04-04 01:18:29.320098 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-04 01:18:29.320103 | orchestrator | + echo
2026-04-04 01:18:29.321191 | orchestrator | ++ semver latest 10.0.0-0
2026-04-04 01:18:29.376363 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-04 01:18:29.376439 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-04 01:18:29.376447 | orchestrator | + osism status messaging
2026-04-04 01:18:36.467970 | orchestrator | 2026-04-04 01:18:36 | ERROR  | Unable to get ansible vault password
2026-04-04 01:18:36.468063 | orchestrator | 2026-04-04 01:18:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:18:36.468073 | orchestrator | 2026-04-04 01:18:36 | ERROR  | Dropping encrypted entries
2026-04-04 01:18:36.500985 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack...
2026-04-04 01:18:36.550945 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] RabbitMQ Version: 4.1.8
2026-04-04 01:18:36.551031 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Erlang Version: 27.3.4.1
2026-04-04 01:18:36.551040 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-04-04 01:18:36.551047 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Cluster Size: 3
2026-04-04 01:18:36.551055 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-04 01:18:36.551063 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-04 01:18:36.551070 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-04-04 01:18:36.551076 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Connections: 209, Channels: 208, Queues: 173
2026-04-04 01:18:36.551082 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Messages: 234 total, 233 ready, 1 unacked
2026-04-04 01:18:36.551089 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Message Rates: 6.8/s publish, 7.0/s deliver
2026-04-04 01:18:36.551095 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Disk Free: 58.2 GB (limit: 0.0 GB)
2026-04-04 01:18:36.551102 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Memory Used: 0.15 GB (limit: 18.80 GB)
2026-04-04 01:18:36.551108 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] File Descriptors: 116/1024
2026-04-04 01:18:36.551114 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-0] Sockets: 0/0
2026-04-04 01:18:36.551120 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack...
2026-04-04 01:18:36.599620 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] RabbitMQ Version: 4.1.8
2026-04-04 01:18:36.599707 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Erlang Version: 27.3.4.1
2026-04-04 01:18:36.599716 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1
2026-04-04 01:18:36.599723 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Cluster Size: 3
2026-04-04 01:18:36.599731 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-04 01:18:36.599739 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-04 01:18:36.599820 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Partitions: None (healthy)
2026-04-04 01:18:36.599828 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Connections: 209, Channels: 208, Queues: 173
2026-04-04 01:18:36.599835 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Messages: 234 total, 233 ready, 1 unacked
2026-04-04 01:18:36.599842 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Message Rates: 6.8/s publish, 7.0/s deliver
2026-04-04 01:18:36.599872 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB)
2026-04-04 01:18:36.599878 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Memory Used: 0.15 GB (limit: 18.80 GB)
2026-04-04 01:18:36.600091 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] File Descriptors: 93/1024
2026-04-04 01:18:36.600109 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-1] Sockets: 0/0
2026-04-04 01:18:36.600284 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack...
2026-04-04 01:18:36.647991 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] RabbitMQ Version: 4.1.8
2026-04-04 01:18:36.648072 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Erlang Version: 27.3.4.1
2026-04-04 01:18:36.648175 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2
2026-04-04 01:18:36.648187 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Cluster Size: 3
2026-04-04 01:18:36.648195 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-04 01:18:36.648204 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-04 01:18:36.648210 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Partitions: None (healthy)
2026-04-04 01:18:36.648562 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Connections: 209, Channels: 208, Queues: 173
2026-04-04 01:18:36.648584 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Messages: 234 total, 233 ready, 1 unacked
2026-04-04 01:18:36.648591 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Message Rates: 6.8/s publish, 7.0/s deliver
2026-04-04 01:18:36.648599 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB)
2026-04-04 01:18:36.648606 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Memory Used: 0.16 GB (limit: 18.80 GB)
2026-04-04 01:18:36.648612 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] File Descriptors: 120/1024
2026-04-04 01:18:36.648618 | orchestrator | 2026-04-04 01:18:36 | INFO  | [testbed-node-2] Sockets: 0/0
2026-04-04 01:18:36.648859 | orchestrator | 2026-04-04 01:18:36 | INFO  | RabbitMQ Cluster validation PASSED
2026-04-04 01:18:36.878358 | orchestrator |
2026-04-04 01:18:36.878436 | orchestrator | # Status of Redis
2026-04-04 01:18:36.878447 | orchestrator |
2026-04-04 01:18:36.878454 | orchestrator | + echo
2026-04-04 01:18:36.878461 | orchestrator | + echo '# Status of Redis'
2026-04-04 01:18:36.878469 | orchestrator | + echo
2026-04-04 01:18:36.878479 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-04 01:18:36.884637 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001629s;;;0.000000;10.000000
2026-04-04 01:18:36.884709 | orchestrator |
2026-04-04 01:18:36.884716 | orchestrator | # Create backup of MariaDB database
2026-04-04 01:18:36.884721 | orchestrator |
2026-04-04 01:18:36.884725 | orchestrator | + popd
2026-04-04 01:18:36.884729 | orchestrator | + echo
2026-04-04 01:18:36.884733 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-04 01:18:36.884738 | orchestrator | + echo
2026-04-04 01:18:36.884742 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-04 01:18:38.219403 | orchestrator | 2026-04-04 01:18:38 | INFO  | Prepare task for execution of mariadb_backup.
2026-04-04 01:18:38.282634 | orchestrator | 2026-04-04 01:18:38 | INFO  | Task 2e9fa41e-2ab5-4c7a-be84-2552b004c42e (mariadb_backup) was prepared for execution.
2026-04-04 01:18:38.282712 | orchestrator | 2026-04-04 01:18:38 | INFO  | It takes a moment until task 2e9fa41e-2ab5-4c7a-be84-2552b004c42e (mariadb_backup) has been started and output is visible here.
2026-04-04 01:20:12.982067 | orchestrator | 2026-04-04 01:20:12.982150 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:20:12.982158 | orchestrator | 2026-04-04 01:20:12.982175 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:20:12.982180 | orchestrator | Saturday 04 April 2026 01:18:41 +0000 (0:00:00.234) 0:00:00.234 ******** 2026-04-04 01:20:12.982184 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:20:12.982189 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:20:12.982194 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:20:12.982197 | orchestrator | 2026-04-04 01:20:12.982201 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:20:12.982206 | orchestrator | Saturday 04 April 2026 01:18:41 +0000 (0:00:00.307) 0:00:00.542 ******** 2026-04-04 01:20:12.982210 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-04 01:20:12.982215 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-04 01:20:12.982219 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-04 01:20:12.982222 | orchestrator | 2026-04-04 01:20:12.982228 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-04 01:20:12.982232 | orchestrator | 2026-04-04 01:20:12.982236 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-04 01:20:12.982240 | orchestrator | Saturday 04 April 2026 01:18:42 +0000 (0:00:00.458) 0:00:01.000 ******** 2026-04-04 01:20:12.982244 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-04 01:20:12.982248 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-04 01:20:12.982252 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-04 01:20:12.982256 | orchestrator | 
2026-04-04 01:20:12.982260 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-04 01:20:12.982263 | orchestrator | Saturday 04 April 2026 01:18:42 +0000 (0:00:00.382) 0:00:01.382 ******** 2026-04-04 01:20:12.982268 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:20:12.982272 | orchestrator | 2026-04-04 01:20:12.982276 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-04 01:20:12.982280 | orchestrator | Saturday 04 April 2026 01:18:43 +0000 (0:00:00.634) 0:00:02.017 ******** 2026-04-04 01:20:12.982284 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:20:12.982287 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:20:12.982291 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:20:12.982295 | orchestrator | 2026-04-04 01:20:12.982299 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-04 01:20:12.982302 | orchestrator | Saturday 04 April 2026 01:18:46 +0000 (0:00:03.564) 0:00:05.582 ******** 2026-04-04 01:20:12.982306 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:20:12.982311 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:20:12.982315 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:20:12.982318 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-04 01:20:12.982322 | orchestrator | 2026-04-04 01:20:12.982326 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-04 01:20:12.982330 | orchestrator | skipping: no hosts matched 2026-04-04 01:20:12.982334 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-04 01:20:12.982337 | orchestrator | 2026-04-04 01:20:12.982341 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-04 01:20:12.982345 | orchestrator | skipping: no hosts matched 2026-04-04 01:20:12.982349 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-04 01:20:12.982368 | orchestrator | mariadb_bootstrap_restart 2026-04-04 01:20:12.982372 | orchestrator | 2026-04-04 01:20:12.982376 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-04 01:20:12.982380 | orchestrator | skipping: no hosts matched 2026-04-04 01:20:12.982384 | orchestrator | 2026-04-04 01:20:12.982387 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-04 01:20:12.982391 | orchestrator | 2026-04-04 01:20:12.982395 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-04 01:20:12.982399 | orchestrator | Saturday 04 April 2026 01:20:12 +0000 (0:01:25.470) 0:01:31.052 ******** 2026-04-04 01:20:12.982402 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:20:12.982414 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:20:12.982418 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:20:12.982422 | orchestrator | 2026-04-04 01:20:12.982431 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-04 01:20:12.982435 | orchestrator | Saturday 04 April 2026 01:20:12 +0000 (0:00:00.316) 0:01:31.369 ******** 2026-04-04 01:20:12.982439 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:20:12.982443 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:20:12.982446 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:20:12.982486 | orchestrator | 2026-04-04 01:20:12.982491 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:20:12.982496 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-04 01:20:12.982501 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 01:20:12.982505 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 01:20:12.982509 | orchestrator | 2026-04-04 01:20:12.982513 | orchestrator | 2026-04-04 01:20:12.982517 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:20:12.982523 | orchestrator | Saturday 04 April 2026 01:20:12 +0000 (0:00:00.222) 0:01:31.592 ******** 2026-04-04 01:20:12.982529 | orchestrator | =============================================================================== 2026-04-04 01:20:12.982535 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 85.47s 2026-04-04 01:20:12.982552 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.56s 2026-04-04 01:20:12.982556 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.63s 2026-04-04 01:20:12.982560 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-04-04 01:20:12.982564 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s 2026-04-04 01:20:12.982568 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-04-04 01:20:12.982571 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-04 01:20:12.982575 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2026-04-04 01:20:13.148890 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-04 01:20:13.156932 | orchestrator | + set -e 2026-04-04 01:20:13.157019 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 01:20:13.157031 | 
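The PLAY RECAP above is what a wrapper script would normally inspect to decide pass/fail for the backup step. A small sketch that pulls the `failed=`/`unreachable=` counters out of recap lines (the regex is written against the format shown here and may need adjusting for other Ansible versions):

```python
import re

RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_failures(lines):
    """Return hosts whose recap line reports failed or unreachable tasks."""
    bad = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m["failed"]) or int(m["unreachable"])):
            bad.append(m["host"])
    return bad

# Recap lines copied from the run above
recap = [
    "testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0",
    "testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0",
]
print(recap_failures(recap))  # → []
```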
orchestrator | ++ export INTERACTIVE=false 2026-04-04 01:20:13.157039 | orchestrator | ++ INTERACTIVE=false 2026-04-04 01:20:13.157046 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 01:20:13.157071 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 01:20:13.157078 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-04 01:20:13.158287 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-04 01:20:13.164631 | orchestrator | 2026-04-04 01:20:13.164707 | orchestrator | # OpenStack endpoints 2026-04-04 01:20:13.164715 | orchestrator | 2026-04-04 01:20:13.164722 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:20:13.164769 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:20:13.164778 | orchestrator | + export OS_CLOUD=admin 2026-04-04 01:20:13.164784 | orchestrator | + OS_CLOUD=admin 2026-04-04 01:20:13.164791 | orchestrator | + echo 2026-04-04 01:20:13.164798 | orchestrator | + echo '# OpenStack endpoints' 2026-04-04 01:20:13.164804 | orchestrator | + echo 2026-04-04 01:20:13.164810 | orchestrator | + openstack endpoint list 2026-04-04 01:20:16.241830 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-04 01:20:16.241903 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-04 01:20:16.241910 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-04 01:20:16.241915 | orchestrator | | 2052a964bb1d47bebc0e57ffb7ce790c | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-04 01:20:16.241919 | orchestrator | | 38404354b9fd4aeaa60b22392df8aaf0 | RegionOne | nova 
| compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-04 01:20:16.241923 | orchestrator | | 3db5cbf2d8e9488e84ef1831a164257d | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-04 01:20:16.241927 | orchestrator | | 41265b5f0b9647e8bbcc75da33870c6f | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-04 01:20:16.241930 | orchestrator | | 47a11fbeb4644554a8d0f27daa07bf08 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-04 01:20:16.241934 | orchestrator | | 4ae4a51d47ac40d6b8e39019d1c9483d | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-04 01:20:16.241948 | orchestrator | | 4dd6555c5f344e79aa60960994a154d8 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-04 01:20:16.241952 | orchestrator | | 6716d2e7a36f42ee88d86218adb8d41e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-04 01:20:16.241962 | orchestrator | | 7a48d45633144848ac55fd2bb393d3c6 | RegionOne | cinder | block-storage | True | public | https://api.testbed.osism.xyz:8776/v3 | 2026-04-04 01:20:16.241966 | orchestrator | | 7e1d09da35bf4d18b505c8797ea2b6e1 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-04 01:20:16.241969 | orchestrator | | 82f679e9c439443aa5e8c8539009ae24 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-04 01:20:16.241973 | orchestrator | | 884dcd84c03844d6bce71005e6111dd2 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-04 01:20:16.241977 | orchestrator | | 8c37bb967c0345228959574324f1df1c | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-04 
01:20:16.241981 | orchestrator | | 8f526f749a43436cbbee9ed1cf72f3e4 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-04 01:20:16.241984 | orchestrator | | a4ae899338fb456497964f4643b23b30 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-04 01:20:16.241988 | orchestrator | | ad6c7438713a4174b158ac7431fbca5c | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-04 01:20:16.242051 | orchestrator | | bcba48fa49e143a491c68fe32a7d71bf | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-04 01:20:16.242056 | orchestrator | | bde310b4d63e4e4db05e27379f324d6b | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-04 01:20:16.242069 | orchestrator | | c7923a825a4842d0b5cb0175aea6910f | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-04 01:20:16.242074 | orchestrator | | e73ec6e643a444dbb91b024612339f99 | RegionOne | cinder | block-storage | True | internal | https://api-int.testbed.osism.xyz:8776/v3 | 2026-04-04 01:20:16.242090 | orchestrator | | e8f06740f25b47b795d0ae72d05b77b7 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-04 01:20:16.242094 | orchestrator | | ed8da7ebcc1442de9b93814fabeaf335 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-04 01:20:16.242098 | orchestrator | | fc3605658cab40d783af0b965f7aed30 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-04 01:20:16.242101 | orchestrator | | fe9e8d39e2ea42c7aa1df6b684e72542 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-04 01:20:16.242105 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-04 01:20:16.476823 | orchestrator | 2026-04-04 01:20:16.476893 | orchestrator | # Cinder 2026-04-04 01:20:16.476900 | orchestrator | 2026-04-04 01:20:16.476905 | orchestrator | + echo 2026-04-04 01:20:16.476909 | orchestrator | + echo '# Cinder' 2026-04-04 01:20:16.476913 | orchestrator | + echo 2026-04-04 01:20:16.476917 | orchestrator | + openstack volume service list 2026-04-04 01:20:18.964083 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-04 01:20:18.964139 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-04 01:20:18.964150 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-04 01:20:18.964159 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-04T01:20:18.000000 | 2026-04-04 01:20:18.964167 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-04T01:20:18.000000 | 2026-04-04 01:20:18.964175 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-04T01:20:08.000000 | 2026-04-04 01:20:18.964184 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-04T01:20:18.000000 | 2026-04-04 01:20:18.964192 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-04T01:20:14.000000 | 2026-04-04 01:20:18.964199 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-04T01:20:15.000000 | 2026-04-04 01:20:18.964207 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-04T01:20:17.000000 | 2026-04-04 01:20:18.964215 | orchestrator | | cinder-backup | 
testbed-node-2 | nova | enabled | up | 2026-04-04T01:20:09.000000 | 2026-04-04 01:20:18.964221 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-04T01:20:09.000000 | 2026-04-04 01:20:18.964226 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-04 01:20:19.188738 | orchestrator | 2026-04-04 01:20:19.188794 | orchestrator | # Neutron 2026-04-04 01:20:19.188814 | orchestrator | 2026-04-04 01:20:19.188819 | orchestrator | + echo 2026-04-04 01:20:19.188824 | orchestrator | + echo '# Neutron' 2026-04-04 01:20:19.188829 | orchestrator | + echo 2026-04-04 01:20:19.188833 | orchestrator | + openstack network agent list 2026-04-04 01:20:21.814524 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-04 01:20:21.814630 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-04 01:20:21.814647 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-04 01:20:21.814658 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-04 01:20:21.814733 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-04 01:20:21.814748 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-04 01:20:21.814757 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-04 01:20:21.814767 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-04 01:20:21.814795 | orchestrator | | 
testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-04 01:20:21.814806 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-04 01:20:21.814816 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-04 01:20:21.814826 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-04 01:20:21.814836 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-04 01:20:21.968234 | orchestrator | + openstack network service provider list 2026-04-04 01:20:24.210826 | orchestrator | +---------------+------+---------+ 2026-04-04 01:20:24.210920 | orchestrator | | Service Type | Name | Default | 2026-04-04 01:20:24.210930 | orchestrator | +---------------+------+---------+ 2026-04-04 01:20:24.210937 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-04 01:20:24.210943 | orchestrator | +---------------+------+---------+ 2026-04-04 01:20:24.363293 | orchestrator | 2026-04-04 01:20:24.363385 | orchestrator | # Nova 2026-04-04 01:20:24.363395 | orchestrator | 2026-04-04 01:20:24.363401 | orchestrator | + echo 2026-04-04 01:20:24.363408 | orchestrator | + echo '# Nova' 2026-04-04 01:20:24.363414 | orchestrator | + echo 2026-04-04 01:20:24.363420 | orchestrator | + openstack compute service list 2026-04-04 01:20:26.846374 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-04 01:20:26.846575 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-04 01:20:26.846596 | orchestrator | 
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-04 01:20:26.846603 | orchestrator | | e56c40eb-4751-45ff-99de-44bbe486c9fd | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-04T01:20:19.000000 | 2026-04-04 01:20:26.846608 | orchestrator | | f574a72e-d602-4c63-9061-8e748975cd96 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-04T01:20:19.000000 | 2026-04-04 01:20:26.846612 | orchestrator | | 48716cc2-85b5-47ef-9781-dfbb09fba6e2 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-04T01:20:23.000000 | 2026-04-04 01:20:26.846660 | orchestrator | | a6bcc473-aa33-43f4-8871-bb856db79333 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-04T01:20:23.000000 | 2026-04-04 01:20:26.846665 | orchestrator | | a0c46076-7ab6-469b-b42f-7cbe164d336e | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-04T01:20:24.000000 | 2026-04-04 01:20:26.846669 | orchestrator | | dfe2b88c-eb3a-42cc-a539-2d444cb6d38b | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-04T01:20:25.000000 | 2026-04-04 01:20:26.846674 | orchestrator | | 43b384d8-e42e-4f64-929c-99a1b7a78bb9 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-04T01:20:22.000000 | 2026-04-04 01:20:26.846677 | orchestrator | | 8de2883f-7e3e-44b8-8398-17fedbf9244e | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-04T01:20:22.000000 | 2026-04-04 01:20:26.846681 | orchestrator | | fbc6de8b-c52d-4dfb-aac3-1d5246d5175a | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-04T01:20:23.000000 | 2026-04-04 01:20:26.846685 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-04 01:20:26.990665 | orchestrator | + openstack hypervisor list 2026-04-04 01:20:29.904295 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-04 01:20:29.904377 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-04 01:20:29.904384 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-04 01:20:29.904389 | orchestrator | | 587f8386-fbed-4585-ad12-27e7f8f66f10 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-04 01:20:29.904394 | orchestrator | | 9537a17a-902a-4f6b-9d94-6cf40bc747b9 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-04 01:20:29.904398 | orchestrator | | 48f8bd12-aafd-4bf0-9dc8-d5492a488c7a | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-04 01:20:29.904403 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-04 01:20:30.182967 | orchestrator | 2026-04-04 01:20:30.183036 | orchestrator | # Run OpenStack test play 2026-04-04 01:20:30.183044 | orchestrator | 2026-04-04 01:20:30.183049 | orchestrator | + echo 2026-04-04 01:20:30.183053 | orchestrator | + echo '# Run OpenStack test play' 2026-04-04 01:20:30.183058 | orchestrator | + echo 2026-04-04 01:20:30.183063 | orchestrator | + osism apply --environment openstack test 2026-04-04 01:20:31.422774 | orchestrator | 2026-04-04 01:20:31 | INFO  | Trying to run play test in environment openstack 2026-04-04 01:20:31.450813 | orchestrator | 2026-04-04 01:20:31 | INFO  | Prepare task for execution of test. 2026-04-04 01:20:31.519015 | orchestrator | 2026-04-04 01:20:31 | INFO  | Task 79477b36-86c7-4e01-9b5a-0f60ed6223e0 (test) was prepared for execution. 2026-04-04 01:20:31.519100 | orchestrator | 2026-04-04 01:20:31 | INFO  | It takes a moment until task 79477b36-86c7-4e01-9b5a-0f60ed6223e0 (test) has been started and output is visible here. 
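The cinder, neutron, and nova listings above are eyeballed for `enabled`/`up` columns. With the client's JSON formatter (`openstack volume service list -f json`, likewise for `compute service list`) the same "everything up" assertion can be scripted; the sample JSON below is a hand-written stand-in mirroring the tables above, not output captured from this job:

```python
import json

def all_services_up(service_list_json: str) -> bool:
    """True if every service in `openstack ... service list -f json`
    output is both enabled and up."""
    services = json.loads(service_list_json)
    return all(s["Status"] == "enabled" and s["State"] == "up" for s in services)

# Hand-written sample mirroring the cinder table layout above
sample = json.dumps([
    {"Binary": "cinder-scheduler", "Host": "testbed-node-0",
     "Status": "enabled", "State": "up"},
    {"Binary": "cinder-volume", "Host": "testbed-node-0@rbd-volumes",
     "Status": "enabled", "State": "up"},
])
print(all_services_up(sample))  # → True
```

A check script would run this per service and fail the job early instead of relying on a human reading the tables.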
2026-04-04 01:23:44.350303 | orchestrator | 2026-04-04 01:23:44.350374 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-04 01:23:44.350381 | orchestrator | 2026-04-04 01:23:44.350386 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-04 01:23:44.350390 | orchestrator | Saturday 04 April 2026 01:20:34 +0000 (0:00:00.100) 0:00:00.100 ******** 2026-04-04 01:23:44.350394 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350398 | orchestrator | 2026-04-04 01:23:44.350402 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-04 01:23:44.350406 | orchestrator | Saturday 04 April 2026 01:20:38 +0000 (0:00:03.747) 0:00:03.847 ******** 2026-04-04 01:23:44.350410 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350414 | orchestrator | 2026-04-04 01:23:44.350418 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-04 01:23:44.350432 | orchestrator | Saturday 04 April 2026 01:20:42 +0000 (0:00:04.110) 0:00:07.958 ******** 2026-04-04 01:23:44.350436 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350439 | orchestrator | 2026-04-04 01:23:44.350443 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-04 01:23:44.350447 | orchestrator | Saturday 04 April 2026 01:20:48 +0000 (0:00:06.249) 0:00:14.207 ******** 2026-04-04 01:23:44.350451 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350454 | orchestrator | 2026-04-04 01:23:44.350458 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-04 01:23:44.350462 | orchestrator | Saturday 04 April 2026 01:20:52 +0000 (0:00:04.010) 0:00:18.217 ******** 2026-04-04 01:23:44.350466 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350470 | orchestrator | 2026-04-04 01:23:44.350482 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-04 01:23:44.350486 | orchestrator | Saturday 04 April 2026 01:20:56 +0000 (0:00:04.265) 0:00:22.483 ******** 2026-04-04 01:23:44.350489 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-04 01:23:44.350494 | orchestrator | changed: [localhost] => (item=member) 2026-04-04 01:23:44.350498 | orchestrator | changed: [localhost] => (item=creator) 2026-04-04 01:23:44.350502 | orchestrator | 2026-04-04 01:23:44.350506 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-04 01:23:44.350509 | orchestrator | Saturday 04 April 2026 01:21:08 +0000 (0:00:11.668) 0:00:34.152 ******** 2026-04-04 01:23:44.350513 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350517 | orchestrator | 2026-04-04 01:23:44.350521 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-04 01:23:44.350524 | orchestrator | Saturday 04 April 2026 01:21:13 +0000 (0:00:04.452) 0:00:38.604 ******** 2026-04-04 01:23:44.350528 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350532 | orchestrator | 2026-04-04 01:23:44.350535 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-04 01:23:44.350539 | orchestrator | Saturday 04 April 2026 01:21:17 +0000 (0:00:04.511) 0:00:43.116 ******** 2026-04-04 01:23:44.350543 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350547 | orchestrator | 2026-04-04 01:23:44.350550 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-04 01:23:44.350554 | orchestrator | Saturday 04 April 2026 01:21:22 +0000 (0:00:04.532) 0:00:47.649 ******** 2026-04-04 01:23:44.350558 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350562 | orchestrator | 2026-04-04 01:23:44.350565 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-04-04 01:23:44.350569 | orchestrator | Saturday 04 April 2026 01:21:26 +0000 (0:00:03.982) 0:00:51.632 ******** 2026-04-04 01:23:44.350573 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350577 | orchestrator | 2026-04-04 01:23:44.350580 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-04 01:23:44.350584 | orchestrator | Saturday 04 April 2026 01:21:30 +0000 (0:00:04.162) 0:00:55.795 ******** 2026-04-04 01:23:44.350588 | orchestrator | changed: [localhost] 2026-04-04 01:23:44.350591 | orchestrator | 2026-04-04 01:23:44.350595 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-04 01:23:44.350599 | orchestrator | Saturday 04 April 2026 01:21:34 +0000 (0:00:03.959) 0:00:59.754 ******** 2026-04-04 01:23:44.350603 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-04 01:23:44.350607 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-04 01:23:44.350611 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-04 01:23:44.350614 | orchestrator | 2026-04-04 01:23:44.350618 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-04 01:23:44.350622 | orchestrator | Saturday 04 April 2026 01:21:47 +0000 (0:00:13.746) 0:01:13.500 ******** 2026-04-04 01:23:44.350626 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-04 01:23:44.350630 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-04 01:23:44.350637 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-04 01:23:44.350640 | orchestrator | 2026-04-04 01:23:44.350644 | orchestrator | TASK [Create test routers] 
*****************************************************
2026-04-04 01:23:44.350648 | orchestrator | Saturday 04 April 2026 01:22:04 +0000 (0:00:16.086) 0:01:29.587 ********
2026-04-04 01:23:44.350652 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-04 01:23:44.350656 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-04 01:23:44.350659 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-04 01:23:44.350663 | orchestrator |
2026-04-04 01:23:44.350667 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-04 01:23:44.350671 | orchestrator |
2026-04-04 01:23:44.350674 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-04 01:23:44.350686 | orchestrator | Saturday 04 April 2026 01:22:37 +0000 (0:00:33.671) 0:02:03.258 ********
2026-04-04 01:23:44.350691 | orchestrator | ok: [localhost]
2026-04-04 01:23:44.350695 | orchestrator |
2026-04-04 01:23:44.350701 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-04 01:23:44.350705 | orchestrator | Saturday 04 April 2026 01:22:41 +0000 (0:00:03.376) 0:02:06.634 ********
2026-04-04 01:23:44.350708 | orchestrator | skipping: [localhost]
2026-04-04 01:23:44.350712 | orchestrator |
2026-04-04 01:23:44.350716 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-04 01:23:44.350720 | orchestrator | Saturday 04 April 2026 01:22:41 +0000 (0:00:00.050) 0:02:06.685 ********
2026-04-04 01:23:44.350723 | orchestrator | skipping: [localhost]
2026-04-04 01:23:44.350727 | orchestrator |
2026-04-04 01:23:44.350731 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-04 01:23:44.350734 | orchestrator | Saturday 04 April 2026 01:22:41 +0000 (0:00:00.046) 0:02:06.731 ********
2026-04-04 01:23:44.350738 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-04 01:23:44.350742 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-04 01:23:44.350746 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-04 01:23:44.350749 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-04 01:23:44.350753 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-04 01:23:44.350757 | orchestrator | skipping: [localhost]
2026-04-04 01:23:44.350761 | orchestrator |
2026-04-04 01:23:44.350764 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-04 01:23:44.350768 | orchestrator | Saturday 04 April 2026 01:22:41 +0000 (0:00:00.128) 0:02:06.860 ********
2026-04-04 01:23:44.350772 | orchestrator | skipping: [localhost]
2026-04-04 01:23:44.350776 | orchestrator |
2026-04-04 01:23:44.350779 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-04 01:23:44.350783 | orchestrator | Saturday 04 April 2026 01:22:41 +0000 (0:00:00.119) 0:02:06.979 ********
2026-04-04 01:23:44.350787 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-04 01:23:44.350790 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-04 01:23:44.350794 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-04 01:23:44.350798 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-04 01:23:44.350802 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-04 01:23:44.350805 | orchestrator |
2026-04-04 01:23:44.350809 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-04 01:23:44.350816 | orchestrator | Saturday 04 April 2026 01:22:45 +0000 (0:00:04.192) 0:02:11.172 ********
2026-04-04 01:23:44.350819 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-04 01:23:44.350824 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-04 01:23:44.350828 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-04 01:23:44.350831 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-04 01:23:44.350835 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-04-04 01:23:44.350840 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j648736413072.2803', 'results_file': '/ansible/.ansible_async/j648736413072.2803', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:23:44.350845 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j515867014058.2828', 'results_file': '/ansible/.ansible_async/j515867014058.2828', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:23:44.350849 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j717528491764.2853', 'results_file': '/ansible/.ansible_async/j717528491764.2853', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:23:44.350853 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j338593153704.2878', 'results_file': '/ansible/.ansible_async/j338593153704.2878', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:23:44.350857 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j410892719616.2903', 'results_file': '/ansible/.ansible_async/j410892719616.2903', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-04 01:23:44.350861 | orchestrator |
2026-04-04 01:23:44.350865 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-04 01:23:44.350869 | orchestrator | Saturday 04 April 2026 01:23:43 +0000 (0:00:57.738) 0:03:08.910 ********
2026-04-04 01:23:44.350875 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-04 01:24:56.836183 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-04 01:24:56.836315 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-04 01:24:56.836327 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-04 01:24:56.836334 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-04 01:24:56.836341 | orchestrator |
2026-04-04 01:24:56.836347 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-04 01:24:56.836354 | orchestrator | Saturday 04 April 2026 01:23:47 +0000 (0:00:04.552) 0:03:13.463 ********
2026-04-04 01:24:56.836360 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-04 01:24:56.836370 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j144233907743.3014', 'results_file': '/ansible/.ansible_async/j144233907743.3014', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836378 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j967106167264.3039', 'results_file': '/ansible/.ansible_async/j967106167264.3039', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836406 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j784586533909.3064', 'results_file': '/ansible/.ansible_async/j784586533909.3064', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836414 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j46672467624.3089', 'results_file': '/ansible/.ansible_async/j46672467624.3089', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836418 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j319678738622.3114', 'results_file': '/ansible/.ansible_async/j319678738622.3114', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836422 | orchestrator |
2026-04-04 01:24:56.836428 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-04 01:24:56.836434 | orchestrator | Saturday 04 April 2026 01:23:57 +0000 (0:00:09.565) 0:03:23.029 ********
2026-04-04 01:24:56.836440 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-04 01:24:56.836449 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-04 01:24:56.836456 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-04 01:24:56.836463 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-04 01:24:56.836469 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-04 01:24:56.836475 | orchestrator |
2026-04-04 01:24:56.836481 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-04 01:24:56.836487 | orchestrator | Saturday 04 April 2026 01:24:01 +0000 (0:00:04.299) 0:03:27.328 ********
2026-04-04 01:24:56.836493 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-04 01:24:56.836499 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j46726477861.3190', 'results_file': '/ansible/.ansible_async/j46726477861.3190', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836506 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j652777937422.3215', 'results_file': '/ansible/.ansible_async/j652777937422.3215', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836513 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j755734521324.3241', 'results_file': '/ansible/.ansible_async/j755734521324.3241', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836518 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j844387106182.3267', 'results_file': '/ansible/.ansible_async/j844387106182.3267', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836556 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j398410779850.3293', 'results_file': '/ansible/.ansible_async/j398410779850.3293', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-04 01:24:56.836564 | orchestrator |
2026-04-04 01:24:56.836570 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-04 01:24:56.836576 | orchestrator | Saturday 04 April 2026 01:24:12 +0000 (0:00:10.243) 0:03:37.572 ********
2026-04-04 01:24:56.836582 | orchestrator | changed: [localhost]
2026-04-04 01:24:56.836589 | orchestrator |
2026-04-04 01:24:56.836595 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-04 01:24:56.836610 | orchestrator | Saturday 04 April 2026 01:24:18 +0000 (0:00:06.634) 0:03:44.206 ********
2026-04-04 01:24:56.836616 | orchestrator | changed: [localhost]
2026-04-04 01:24:56.836620 | orchestrator |
2026-04-04 01:24:56.836624 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-04 01:24:56.836628 | orchestrator | Saturday 04 April 2026 01:24:32 +0000 (0:00:14.238) 0:03:58.445 ********
2026-04-04 01:24:56.836632 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-04 01:24:56.836636 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-04 01:24:56.836640 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-04 01:24:56.836644 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-04 01:24:56.836647 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-04 01:24:56.836651 | orchestrator |
2026-04-04 01:24:56.836655 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-04 01:24:56.836659 | orchestrator | Saturday 04 April 2026 01:24:56 +0000 (0:00:23.642) 0:04:22.087 ********
2026-04-04 01:24:56.836663 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-04 01:24:56.836666 | orchestrator |     "msg": "test: 192.168.112.117"
2026-04-04 01:24:56.836670 | orchestrator | }
2026-04-04 01:24:56.836675 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-04 01:24:56.836679 | orchestrator |     "msg": "test-1: 192.168.112.188"
2026-04-04 01:24:56.836685 | orchestrator | }
2026-04-04 01:24:56.836693 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-04 01:24:56.836701 | orchestrator |     "msg": "test-2: 192.168.112.167"
2026-04-04 01:24:56.836708 | orchestrator | }
2026-04-04 01:24:56.836713 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-04 01:24:56.836719 | orchestrator |     "msg": "test-3: 192.168.112.182"
2026-04-04 01:24:56.836725 | orchestrator | }
2026-04-04 01:24:56.836732 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-04 01:24:56.836737 | orchestrator |     "msg": "test-4: 192.168.112.136"
2026-04-04 01:24:56.836743 | orchestrator | }
2026-04-04 01:24:56.836748 | orchestrator |
2026-04-04 01:24:56.836754 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:24:56.836760 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-04 01:24:56.836767 | orchestrator |
2026-04-04 01:24:56.836773 | orchestrator |
2026-04-04 01:24:56.836779 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:24:56.836785 | orchestrator | Saturday 04 April 2026 01:24:56 +0000 (0:00:00.120) 0:04:22.207 ********
2026-04-04 01:24:56.836791 | orchestrator | ===============================================================================
2026-04-04 01:24:56.836797 | orchestrator | Wait for instance creation to complete --------------------------------- 57.74s
2026-04-04 01:24:56.836804 | orchestrator | Create test routers ---------------------------------------------------- 33.67s
2026-04-04 01:24:56.836810 | orchestrator | Create floating ip addresses ------------------------------------------- 23.64s
2026-04-04 01:24:56.836817 | orchestrator | Create test subnets ---------------------------------------------------- 16.09s
2026-04-04 01:24:56.836823 | orchestrator | Attach test volume ----------------------------------------------------- 14.24s
2026-04-04 01:24:56.836828 | orchestrator | Create test networks --------------------------------------------------- 13.75s
2026-04-04 01:24:56.836835 | orchestrator | Add member roles to user test ------------------------------------------ 11.67s
2026-04-04 01:24:56.836841 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.24s
2026-04-04 01:24:56.836846 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.57s
2026-04-04 01:24:56.836850 | orchestrator | Create test volume ------------------------------------------------------ 6.63s
2026-04-04 01:24:56.836855 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.25s
2026-04-04 01:24:56.836886 | orchestrator | Add metadata to instances ----------------------------------------------- 4.55s
2026-04-04 01:24:56.836891 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.53s
2026-04-04 01:24:56.836896 | orchestrator | Create ssh security group ----------------------------------------------- 4.51s
2026-04-04 01:24:56.836900 | orchestrator | Create test server group ------------------------------------------------ 4.45s
2026-04-04 01:24:56.836912 | orchestrator | Add tag to instances ---------------------------------------------------- 4.30s
2026-04-04 01:24:56.836916 | orchestrator | Create test user -------------------------------------------------------- 4.27s
2026-04-04 01:24:56.836921 | orchestrator | Create test instances --------------------------------------------------- 4.19s
2026-04-04 01:24:56.836925 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.16s
2026-04-04 01:24:56.836929 | orchestrator | Create test-admin user -------------------------------------------------- 4.11s
2026-04-04 01:24:57.038980 | orchestrator | + server_list
2026-04-04 01:24:57.039083 | orchestrator | + openstack --os-cloud test server list
2026-04-04 01:25:00.616040 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-04 01:25:00.616151 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-04 01:25:00.616164 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-04 01:25:00.616169 | orchestrator | | 7e50c4b8-466c-4930-b43d-8a691787941d | test-4 | ACTIVE | test-3=192.168.112.136, 192.168.202.237 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:25:00.616173 | orchestrator | | f6a8ec72-d283-4b08-9ad1-5872494cf29d | test-3 | ACTIVE | test-2=192.168.112.182, 192.168.201.202 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:25:00.616177 | orchestrator | | 1b50d71d-6e54-49f7-82a9-386ac20253f3 | test-2 | ACTIVE | test-2=192.168.112.167, 192.168.201.116 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:25:00.616182 | orchestrator | | 72b650d0-74af-4ee6-aa5f-93ae779b1e72 | test | ACTIVE | test-1=192.168.112.117, 192.168.200.198 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:25:00.616186 | orchestrator | | 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a | test-1 | ACTIVE | test-1=192.168.112.188, 192.168.200.8 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:25:00.616190 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-04 01:25:00.867928 | orchestrator | + openstack --os-cloud test server show test
2026-04-04 01:25:03.942085 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:03.942141 | orchestrator | | Field | Value |
2026-04-04 01:25:03.942148 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:03.942154 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:25:03.942168 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:25:03.942173 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:25:03.942178 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-04 01:25:03.942183 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:25:03.942187 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:25:03.942204 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:25:03.942209 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:25:03.942214 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:25:03.942218 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:25:03.942226 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:25:03.942231 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:25:03.942294 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:25:03.942306 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:25:03.942317 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:25:03.942324 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:23:20.000000 |
2026-04-04 01:25:03.942336 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:25:03.942343 | orchestrator | | accessIPv4 | |
2026-04-04 01:25:03.942350 | orchestrator | | accessIPv6 | |
2026-04-04 01:25:03.942360 | orchestrator | | addresses | test-1=192.168.112.117, 192.168.200.198 |
2026-04-04 01:25:03.942366 | orchestrator | | config_drive | |
2026-04-04 01:25:03.942370 | orchestrator | | created | 2026-04-04T01:22:50Z |
2026-04-04 01:25:03.942375 | orchestrator | | description | None |
2026-04-04 01:25:03.942380 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:25:03.942387 | orchestrator | | hostId | 6e25f36c2947e1c18d185803bbfb6dd781bddf8130a261f342478e4d |
2026-04-04 01:25:03.942391 | orchestrator | | host_status | None |
2026-04-04 01:25:03.942399 | orchestrator | | id | 72b650d0-74af-4ee6-aa5f-93ae779b1e72 |
2026-04-04 01:25:03.942404 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:25:03.942412 | orchestrator | | key_name | test |
2026-04-04 01:25:03.942416 | orchestrator | | locked | False |
2026-04-04 01:25:03.942421 | orchestrator | | locked_reason | None |
2026-04-04 01:25:03.942426 | orchestrator | | name | test |
2026-04-04 01:25:03.942430 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:25:03.942449 | orchestrator | | progress | 0 |
2026-04-04 01:25:03.942458 | orchestrator | | project_id | 6f0adbfe74ee4d24a416f6628c8b507b |
2026-04-04 01:25:03.942466 | orchestrator | | properties | hostname='test' |
2026-04-04 01:25:03.942476 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:25:03.942483 | orchestrator | | | name='ssh' |
2026-04-04 01:25:03.942500 | orchestrator | | server_groups | None |
2026-04-04 01:25:03.942507 | orchestrator | | status | ACTIVE |
2026-04-04 01:25:03.942514 | orchestrator | | tags | test |
2026-04-04 01:25:03.942520 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:25:03.942526 | orchestrator | | updated | 2026-04-04T01:23:49Z |
2026-04-04 01:25:03.942538 | orchestrator | | user_id | fea4373ad8cd4d538e8254e1cd8f933f |
2026-04-04 01:25:03.942546 | orchestrator | | volumes_attached | delete_on_termination='True', id='3f67c648-d9c9-4d0a-9019-69608c9a8f09' |
2026-04-04 01:25:03.942553 | orchestrator | | | delete_on_termination='False', id='757f80f7-1a95-4dbe-84b6-c3e8d4b974d4' |
2026-04-04 01:25:03.946422 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:04.197859 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-04 01:25:07.003993 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:07.004059 | orchestrator | | Field | Value |
2026-04-04 01:25:07.004068 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:07.004075 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:25:07.004082 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:25:07.004089 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:25:07.004096 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-04 01:25:07.004103 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:25:07.004110 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:25:07.004147 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:25:07.004155 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:25:07.004161 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:25:07.004168 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:25:07.004175 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:25:07.004193 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:25:07.004201 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:25:07.004211 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:25:07.004219 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:25:07.004231 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:23:21.000000 |
2026-04-04 01:25:07.004275 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:25:07.004283 | orchestrator | | accessIPv4 | |
2026-04-04 01:25:07.004290 | orchestrator | | accessIPv6 | |
2026-04-04 01:25:07.004297 | orchestrator | | addresses | test-1=192.168.112.188, 192.168.200.8 |
2026-04-04 01:25:07.004304 | orchestrator | | config_drive | |
2026-04-04 01:25:07.004312 | orchestrator | | created | 2026-04-04T01:22:50Z |
2026-04-04 01:25:07.004319 | orchestrator | | description | None |
2026-04-04 01:25:07.004328 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:25:07.004340 | orchestrator | | hostId | 6e25f36c2947e1c18d185803bbfb6dd781bddf8130a261f342478e4d |
2026-04-04 01:25:07.004347 | orchestrator | | host_status | None |
2026-04-04 01:25:07.004359 | orchestrator | | id | 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a |
2026-04-04 01:25:07.004367 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:25:07.004374 | orchestrator | | key_name | test |
2026-04-04 01:25:07.004381 | orchestrator | | locked | False |
2026-04-04 01:25:07.004388 | orchestrator | | locked_reason | None |
2026-04-04 01:25:07.004395 | orchestrator | | name | test-1 |
2026-04-04 01:25:07.004402 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:25:07.004412 | orchestrator | | progress | 0 |
2026-04-04 01:25:07.004422 | orchestrator | | project_id | 6f0adbfe74ee4d24a416f6628c8b507b |
2026-04-04 01:25:07.004429 | orchestrator | | properties | hostname='test-1' |
2026-04-04 01:25:07.004440 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:25:07.004447 | orchestrator | | | name='ssh' |
2026-04-04 01:25:07.004453 | orchestrator | | server_groups | None |
2026-04-04 01:25:07.004460 | orchestrator | | status | ACTIVE |
2026-04-04 01:25:07.004466 | orchestrator | | tags | test |
2026-04-04 01:25:07.004473 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:25:07.004479 | orchestrator | | updated | 2026-04-04T01:23:50Z |
2026-04-04 01:25:07.004492 | orchestrator | | user_id | fea4373ad8cd4d538e8254e1cd8f933f |
2026-04-04 01:25:07.004498 | orchestrator | | volumes_attached | delete_on_termination='True', id='dc5c825a-4bd6-4592-8fe9-bd1024b697fc' |
2026-04-04 01:25:07.009375 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:07.260675 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-04 01:25:10.049353 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:10.049414 | orchestrator | | Field | Value |
2026-04-04 01:25:10.049423 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:10.049429 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:25:10.049436 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:25:10.049442 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:25:10.049461 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-04 01:25:10.049473 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:25:10.049479 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:25:10.049496 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:25:10.049503 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:25:10.049509 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:25:10.049516 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:25:10.049522 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:25:10.049528 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:25:10.049539 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:25:10.049548 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:25:10.049554 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:25:10.049561 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:23:19.000000 |
2026-04-04 01:25:10.049571 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:25:10.049578 | orchestrator | | accessIPv4 | |
2026-04-04 01:25:10.049584 | orchestrator | | accessIPv6 | |
2026-04-04 01:25:10.049590 | orchestrator | | addresses | test-2=192.168.112.167, 192.168.201.116 |
2026-04-04 01:25:10.049597 | orchestrator | | config_drive | |
2026-04-04 01:25:10.049607 | orchestrator | | created | 2026-04-04T01:22:50Z |
2026-04-04 01:25:10.049613 | orchestrator | | description | None |
2026-04-04 01:25:10.049619 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:25:10.049630 | orchestrator | | hostId | 6e25f36c2947e1c18d185803bbfb6dd781bddf8130a261f342478e4d |
2026-04-04 01:25:10.049637 | orchestrator | | host_status | None |
2026-04-04 01:25:10.049647 | orchestrator | | id | 1b50d71d-6e54-49f7-82a9-386ac20253f3 |
2026-04-04 01:25:10.049654 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:25:10.049660 | orchestrator | | key_name | test |
2026-04-04 01:25:10.049667 | orchestrator | | locked | False |
2026-04-04 01:25:10.049673 | orchestrator | | locked_reason | None |
2026-04-04 01:25:10.049685 | orchestrator | | name | test-2 |
2026-04-04 01:25:10.049691 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:25:10.049700 | orchestrator | | progress | 0 |
2026-04-04 01:25:10.049706 | orchestrator | | project_id | 6f0adbfe74ee4d24a416f6628c8b507b |
2026-04-04 01:25:10.049712 | orchestrator | | properties | hostname='test-2' |
2026-04-04 01:25:10.049722 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:25:10.049729 | orchestrator | | | name='ssh' |
2026-04-04 01:25:10.049735 | orchestrator | | server_groups | None |
2026-04-04 01:25:10.049742 | orchestrator | | status | ACTIVE |
2026-04-04 01:25:10.049751 | orchestrator | | tags | test |
2026-04-04 01:25:10.049758 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:25:10.049764 | orchestrator | | updated | 2026-04-04T01:23:50Z |
2026-04-04 01:25:10.049773 | orchestrator | | user_id | fea4373ad8cd4d538e8254e1cd8f933f |
2026-04-04 01:25:10.049779 | orchestrator | | volumes_attached | delete_on_termination='True', id='2f950698-33e5-4e3f-930c-cbf36eee2c7d' |
2026-04-04 01:25:10.054117 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:10.374270 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-04 01:25:13.335868 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:13.335959 | orchestrator | | Field | Value |
2026-04-04 01:25:13.335969 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-04 01:25:13.335999 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:25:13.336006 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:25:13.336011 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:25:13.336017 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-04 01:25:13.336037 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:25:13.336043 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:25:13.336065 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:25:13.336071 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:25:13.336077 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:25:13.336090 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:25:13.336098 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:25:13.336103 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:25:13.336110 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:25:13.336117 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:25:13.336127 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:25:13.336132 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:23:20.000000 |
2026-04-04 01:25:13.336141 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:25:13.336148 | orchestrator | | accessIPv4 | |
2026-04-04 01:25:13.336154 | orchestrator | | accessIPv6 | |
2026-04-04 01:25:13.336164 | orchestrator | | addresses | test-2=192.168.112.182, 192.168.201.202 |
2026-04-04 01:25:13.336170 | orchestrator | | config_drive | |
2026-04-04 01:25:13.336175 | orchestrator | | created | 2026-04-04T01:22:51Z |
2026-04-04 01:25:13.336182 | orchestrator | | description | None |
2026-04-04 01:25:13.336188 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1',
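[Editor's note: the "started: 1, finished: 0" result items and "FAILED - RETRYING ... (N retries left)" countdowns above are the signature of Ansible's fire-and-forget async pattern. A minimal sketch of what the underlying tasks likely look like is below; the module choice, variable names, and timeout values are assumptions for illustration, not the playbook's actual source.]

```yaml
# Hypothetical sketch: launch instance creation in the background, then poll.
- name: Create test instances
  openstack.cloud.server:
    cloud: test
    name: "{{ item.name }}"
    network: "{{ item.network }}"
  loop: "{{ test_instances }}"
  register: create_result
  async: 600            # run detached; returns ansible_job_id per item
  poll: 0               # do not wait here

- name: Wait for instance creation to complete
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  register: job
  until: job.finished   # each unfinished poll prints a "FAILED - RETRYING" line
  retries: 60           # matches the "(60 retries left)" countdown in the log
  delay: 10
  loop: "{{ create_result.results }}"
```

The same launch-then-`async_status` pairing would explain the later "Wait for metadata to be added" and "Wait for tags to be added" tasks, which show 30-retry countdowns.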
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-04 01:25:13.336197 | orchestrator | | hostId | 6e25f36c2947e1c18d185803bbfb6dd781bddf8130a261f342478e4d | 2026-04-04 01:25:13.336204 | orchestrator | | host_status | None | 2026-04-04 01:25:13.336215 | orchestrator | | id | f6a8ec72-d283-4b08-9ad1-5872494cf29d | 2026-04-04 01:25:13.336221 | orchestrator | | image | N/A (booted from volume) | 2026-04-04 01:25:13.336280 | orchestrator | | key_name | test | 2026-04-04 01:25:13.336287 | orchestrator | | locked | False | 2026-04-04 01:25:13.336290 | orchestrator | | locked_reason | None | 2026-04-04 01:25:13.336294 | orchestrator | | name | test-3 | 2026-04-04 01:25:13.336298 | orchestrator | | pinned_availability_zone | None | 2026-04-04 01:25:13.336302 | orchestrator | | progress | 0 | 2026-04-04 01:25:13.336309 | orchestrator | | project_id | 6f0adbfe74ee4d24a416f6628c8b507b | 2026-04-04 01:25:13.336313 | orchestrator | | properties | hostname='test-3' | 2026-04-04 01:25:13.336323 | orchestrator | | security_groups | name='icmp' | 2026-04-04 01:25:13.336330 | orchestrator | | | name='ssh' | 2026-04-04 01:25:13.336334 | orchestrator | | server_groups | None | 2026-04-04 01:25:13.336338 | orchestrator | | status | ACTIVE | 2026-04-04 01:25:13.336342 | orchestrator | | tags | test | 2026-04-04 01:25:13.336346 | orchestrator | | trusted_image_certificates | None | 2026-04-04 01:25:13.336350 | orchestrator | | updated | 2026-04-04T01:23:50Z | 2026-04-04 01:25:13.336354 | orchestrator | | user_id | fea4373ad8cd4d538e8254e1cd8f933f | 2026-04-04 01:25:13.336358 | orchestrator | | volumes_attached | delete_on_termination='True', id='48ed107d-4055-4118-99e9-62f6300f943d' | 2026-04-04 01:25:13.341501 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-04 01:25:13.613144 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-04 01:25:16.532070 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-04 01:25:16.532160 | orchestrator | | Field | Value | 2026-04-04 01:25:16.532170 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-04 01:25:16.532177 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-04 01:25:16.532184 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-04 01:25:16.532205 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-04 01:25:16.532212 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-04 01:25:16.532218 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-04 01:25:16.532371 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-04 
01:25:16.532428 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-04 01:25:16.532436 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-04 01:25:16.532442 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-04 01:25:16.532449 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-04 01:25:16.532454 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-04 01:25:16.532467 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-04 01:25:16.532473 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-04 01:25:16.532480 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-04 01:25:16.532486 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-04 01:25:16.532491 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:23:18.000000 | 2026-04-04 01:25:16.532508 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-04 01:25:16.532515 | orchestrator | | accessIPv4 | | 2026-04-04 01:25:16.532520 | orchestrator | | accessIPv6 | | 2026-04-04 01:25:16.532527 | orchestrator | | addresses | test-3=192.168.112.136, 192.168.202.237 | 2026-04-04 01:25:16.532532 | orchestrator | | config_drive | | 2026-04-04 01:25:16.532543 | orchestrator | | created | 2026-04-04T01:22:52Z | 2026-04-04 01:25:16.532550 | orchestrator | | description | None | 2026-04-04 01:25:16.532557 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-04 01:25:16.532562 | orchestrator | | hostId | 3953ec0ce67e64150fd3472327c2a25a6e2cfaeec03a663097ea31f3 | 2026-04-04 01:25:16.532574 | orchestrator | | host_status | None | 2026-04-04 01:25:16.532586 | orchestrator | | id | 
7e50c4b8-466c-4930-b43d-8a691787941d | 2026-04-04 01:25:16.532593 | orchestrator | | image | N/A (booted from volume) | 2026-04-04 01:25:16.532600 | orchestrator | | key_name | test | 2026-04-04 01:25:16.532606 | orchestrator | | locked | False | 2026-04-04 01:25:16.532612 | orchestrator | | locked_reason | None | 2026-04-04 01:25:16.532621 | orchestrator | | name | test-4 | 2026-04-04 01:25:16.532628 | orchestrator | | pinned_availability_zone | None | 2026-04-04 01:25:16.532634 | orchestrator | | progress | 0 | 2026-04-04 01:25:16.532646 | orchestrator | | project_id | 6f0adbfe74ee4d24a416f6628c8b507b | 2026-04-04 01:25:16.532652 | orchestrator | | properties | hostname='test-4' | 2026-04-04 01:25:16.532663 | orchestrator | | security_groups | name='icmp' | 2026-04-04 01:25:16.532669 | orchestrator | | | name='ssh' | 2026-04-04 01:25:16.532676 | orchestrator | | server_groups | None | 2026-04-04 01:25:16.532683 | orchestrator | | status | ACTIVE | 2026-04-04 01:25:16.532690 | orchestrator | | tags | test | 2026-04-04 01:25:16.532700 | orchestrator | | trusted_image_certificates | None | 2026-04-04 01:25:16.532706 | orchestrator | | updated | 2026-04-04T01:23:51Z | 2026-04-04 01:25:16.532724 | orchestrator | | user_id | fea4373ad8cd4d538e8254e1cd8f933f | 2026-04-04 01:25:16.532732 | orchestrator | | volumes_attached | delete_on_termination='True', id='22b17e12-b3b1-4a21-9696-5ad6287d828e' | 2026-04-04 01:25:16.536941 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-04 01:25:16.827576 | orchestrator | + server_ping 2026-04-04 01:25:16.828399 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-04 01:25:16.829295 | orchestrator | ++ tr -d '\r' 2026-04-04 01:25:19.584874 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:19.584971 | orchestrator | + ping -c3 192.168.112.182 2026-04-04 01:25:19.598092 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2026-04-04 01:25:19.598167 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.00 ms 2026-04-04 01:25:20.594791 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.48 ms 2026-04-04 01:25:21.595865 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.42 ms 2026-04-04 01:25:21.595937 | orchestrator | 2026-04-04 01:25:21.595944 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-04-04 01:25:21.595950 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:25:21.595957 | orchestrator | rtt min/avg/max/mdev = 1.424/3.634/7.002/2.419 ms 2026-04-04 01:25:21.596061 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:21.596071 | orchestrator | + ping -c3 192.168.112.167 2026-04-04 01:25:21.606973 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 
2026-04-04 01:25:21.607055 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=5.95 ms 2026-04-04 01:25:22.604352 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=1.99 ms 2026-04-04 01:25:23.604662 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=1.17 ms 2026-04-04 01:25:23.604724 | orchestrator | 2026-04-04 01:25:23.604733 | orchestrator | --- 192.168.112.167 ping statistics --- 2026-04-04 01:25:23.604740 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-04 01:25:23.604747 | orchestrator | rtt min/avg/max/mdev = 1.165/3.033/5.949/2.088 ms 2026-04-04 01:25:23.605461 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:23.605475 | orchestrator | + ping -c3 192.168.112.117 2026-04-04 01:25:23.613950 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2026-04-04 01:25:23.613995 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=3.56 ms 2026-04-04 01:25:24.613611 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.45 ms 2026-04-04 01:25:25.615419 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.10 ms 2026-04-04 01:25:25.615509 | orchestrator | 2026-04-04 01:25:25.615518 | orchestrator | --- 192.168.112.117 ping statistics --- 2026-04-04 01:25:25.615523 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:25:25.615527 | orchestrator | rtt min/avg/max/mdev = 1.103/2.037/3.558/1.084 ms 2026-04-04 01:25:25.615545 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:25.615556 | orchestrator | + ping -c3 192.168.112.136 2026-04-04 01:25:25.624061 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data. 
2026-04-04 01:25:25.624118 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=3.48 ms 2026-04-04 01:25:26.623978 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=1.50 ms 2026-04-04 01:25:27.625692 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.40 ms 2026-04-04 01:25:27.625756 | orchestrator | 2026-04-04 01:25:27.625766 | orchestrator | --- 192.168.112.136 ping statistics --- 2026-04-04 01:25:27.625788 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:25:27.625798 | orchestrator | rtt min/avg/max/mdev = 1.395/2.125/3.484/0.961 ms 2026-04-04 01:25:27.626570 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:27.626606 | orchestrator | + ping -c3 192.168.112.188 2026-04-04 01:25:27.638161 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 2026-04-04 01:25:27.638246 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.98 ms 2026-04-04 01:25:28.634691 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.16 ms 2026-04-04 01:25:29.635978 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.47 ms 2026-04-04 01:25:29.636035 | orchestrator | 2026-04-04 01:25:29.636044 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-04-04 01:25:29.636052 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:25:29.636059 | orchestrator | rtt min/avg/max/mdev = 1.474/3.538/6.978/2.448 ms 2026-04-04 01:25:29.636065 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 01:25:29.636072 | orchestrator | + compute_list 2026-04-04 01:25:29.636078 | orchestrator | + osism manage compute list testbed-node-3 2026-04-04 01:25:31.231962 | orchestrator | 2026-04-04 01:25:31 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:31.232035 
| orchestrator | 2026-04-04 01:25:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:31.232043 | orchestrator | 2026-04-04 01:25:31 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:34.645664 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:25:34.645715 | orchestrator | | ID | Name | Status | 2026-04-04 01:25:34.645721 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:25:34.645727 | orchestrator | | f6a8ec72-d283-4b08-9ad1-5872494cf29d | test-3 | ACTIVE | 2026-04-04 01:25:34.645735 | orchestrator | | 1b50d71d-6e54-49f7-82a9-386ac20253f3 | test-2 | ACTIVE | 2026-04-04 01:25:34.645742 | orchestrator | | 72b650d0-74af-4ee6-aa5f-93ae779b1e72 | test | ACTIVE | 2026-04-04 01:25:34.645749 | orchestrator | | 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a | test-1 | ACTIVE | 2026-04-04 01:25:34.645756 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:25:34.955656 | orchestrator | + osism manage compute list testbed-node-4 2026-04-04 01:25:36.554350 | orchestrator | 2026-04-04 01:25:36 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:36.554440 | orchestrator | 2026-04-04 01:25:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:36.554449 | orchestrator | 2026-04-04 01:25:36 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:38.172640 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:25:38.172710 | orchestrator | | ID | Name | Status | 2026-04-04 01:25:38.172717 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:25:38.172721 | orchestrator | | 7e50c4b8-466c-4930-b43d-8a691787941d | test-4 | ACTIVE | 2026-04-04 01:25:38.172726 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-04 01:25:38.484946 | orchestrator | + osism manage compute list testbed-node-5 2026-04-04 01:25:39.980581 | orchestrator | 2026-04-04 01:25:39 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:39.980700 | orchestrator | 2026-04-04 01:25:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:39.980716 | orchestrator | 2026-04-04 01:25:39 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:41.167662 | orchestrator | +------+--------+----------+ 2026-04-04 01:25:41.167761 | orchestrator | | ID | Name | Status | 2026-04-04 01:25:41.167771 | orchestrator | |------+--------+----------| 2026-04-04 01:25:41.167778 | orchestrator | +------+--------+----------+ 2026-04-04 01:25:41.512770 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-04 01:25:43.134623 | orchestrator | 2026-04-04 01:25:43 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:43.134695 | orchestrator | 2026-04-04 01:25:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:43.134703 | orchestrator | 2026-04-04 01:25:43 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:44.763810 | orchestrator | 2026-04-04 01:25:44 | INFO  | Live migrating server 7e50c4b8-466c-4930-b43d-8a691787941d 2026-04-04 01:25:58.498960 | orchestrator | 2026-04-04 01:25:58 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:00.948801 | orchestrator | 2026-04-04 01:26:00 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:03.293883 | orchestrator | 2026-04-04 01:26:03 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:05.597771 | orchestrator | 2026-04-04 
01:26:05 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:08.007333 | orchestrator | 2026-04-04 01:26:08 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:10.305389 | orchestrator | 2026-04-04 01:26:10 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:12.592321 | orchestrator | 2026-04-04 01:26:12 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:14.784421 | orchestrator | 2026-04-04 01:26:14 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress 2026-04-04 01:26:17.024290 | orchestrator | 2026-04-04 01:26:17 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) completed with status ACTIVE 2026-04-04 01:26:17.327253 | orchestrator | + compute_list 2026-04-04 01:26:17.327328 | orchestrator | + osism manage compute list testbed-node-3 2026-04-04 01:26:18.879906 | orchestrator | 2026-04-04 01:26:18 | ERROR  | Unable to get ansible vault password 2026-04-04 01:26:18.879974 | orchestrator | 2026-04-04 01:26:18 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:26:18.879985 | orchestrator | 2026-04-04 01:26:18 | ERROR  | Dropping encrypted entries 2026-04-04 01:26:20.289313 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:26:20.289411 | orchestrator | | ID | Name | Status | 2026-04-04 01:26:20.289422 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:26:20.289429 | orchestrator | | 7e50c4b8-466c-4930-b43d-8a691787941d | test-4 | ACTIVE | 2026-04-04 01:26:20.289436 | orchestrator | | f6a8ec72-d283-4b08-9ad1-5872494cf29d | test-3 | ACTIVE | 2026-04-04 01:26:20.289443 | orchestrator | | 
1b50d71d-6e54-49f7-82a9-386ac20253f3 | test-2 | ACTIVE | 2026-04-04 01:26:20.289481 | orchestrator | | 72b650d0-74af-4ee6-aa5f-93ae779b1e72 | test | ACTIVE | 2026-04-04 01:26:20.289489 | orchestrator | | 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a | test-1 | ACTIVE | 2026-04-04 01:26:20.289495 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:26:20.675633 | orchestrator | + osism manage compute list testbed-node-4 2026-04-04 01:26:22.360418 | orchestrator | 2026-04-04 01:26:22 | ERROR  | Unable to get ansible vault password 2026-04-04 01:26:22.361529 | orchestrator | 2026-04-04 01:26:22 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:26:22.361583 | orchestrator | 2026-04-04 01:26:22 | ERROR  | Dropping encrypted entries 2026-04-04 01:26:23.433607 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:23.433701 | orchestrator | | ID | Name | Status | 2026-04-04 01:26:23.433711 | orchestrator | |------+--------+----------| 2026-04-04 01:26:23.433716 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:23.767389 | orchestrator | + osism manage compute list testbed-node-5 2026-04-04 01:26:25.319076 | orchestrator | 2026-04-04 01:26:25 | ERROR  | Unable to get ansible vault password 2026-04-04 01:26:25.320711 | orchestrator | 2026-04-04 01:26:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:26:25.320783 | orchestrator | 2026-04-04 01:26:25 | ERROR  | Dropping encrypted entries 2026-04-04 01:26:26.443108 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:26.443194 | orchestrator | | ID | Name | Status | 2026-04-04 01:26:26.443203 | orchestrator | |------+--------+----------| 2026-04-04 01:26:26.443207 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:26.742939 | orchestrator | + server_ping 2026-04-04 01:26:26.744086 | orchestrator | ++ 
openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-04 01:26:26.744738 | orchestrator | ++ tr -d '\r' 2026-04-04 01:26:29.486247 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:26:29.486332 | orchestrator | + ping -c3 192.168.112.182 2026-04-04 01:26:29.497440 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2026-04-04 01:26:29.497513 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.24 ms 2026-04-04 01:26:30.494558 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.58 ms 2026-04-04 01:26:31.494815 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.16 ms 2026-04-04 01:26:31.494943 | orchestrator | 2026-04-04 01:26:31.494953 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-04-04 01:26:31.494961 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:26:31.494968 | orchestrator | rtt min/avg/max/mdev = 1.160/3.657/7.236/2.595 ms 2026-04-04 01:26:31.494982 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:26:31.494989 | orchestrator | + ping -c3 192.168.112.167 2026-04-04 01:26:31.505706 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 
2026-04-04 01:26:31.505760 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=5.23 ms 2026-04-04 01:26:32.503473 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=1.48 ms 2026-04-04 01:26:33.504622 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=1.36 ms 2026-04-04 01:26:33.504832 | orchestrator | 2026-04-04 01:26:33.504853 | orchestrator | --- 192.168.112.167 ping statistics --- 2026-04-04 01:26:33.504863 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:26:33.504872 | orchestrator | rtt min/avg/max/mdev = 1.358/2.688/5.231/1.798 ms 2026-04-04 01:26:33.504889 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:26:33.504898 | orchestrator | + ping -c3 192.168.112.117 2026-04-04 01:26:33.514566 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2026-04-04 01:26:33.514616 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=4.54 ms 2026-04-04 01:26:34.513635 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.60 ms 2026-04-04 01:26:35.516717 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.94 ms 2026-04-04 01:26:35.516806 | orchestrator | 2026-04-04 01:26:35.516815 | orchestrator | --- 192.168.112.117 ping statistics --- 2026-04-04 01:26:35.516824 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:26:35.516832 | orchestrator | rtt min/avg/max/mdev = 1.598/2.693/4.544/1.315 ms 2026-04-04 01:26:35.516839 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:26:35.516847 | orchestrator | + ping -c3 192.168.112.136 2026-04-04 01:26:35.529150 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data. 
2026-04-04 01:26:35.529255 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=7.72 ms 2026-04-04 01:26:36.526433 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.74 ms 2026-04-04 01:26:37.527085 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.67 ms 2026-04-04 01:26:37.527200 | orchestrator | 2026-04-04 01:26:37.527213 | orchestrator | --- 192.168.112.136 ping statistics --- 2026-04-04 01:26:37.527220 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:26:37.527229 | orchestrator | rtt min/avg/max/mdev = 1.670/4.043/7.719/2.635 ms 2026-04-04 01:26:37.527545 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:26:37.527569 | orchestrator | + ping -c3 192.168.112.188 2026-04-04 01:26:37.540236 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 2026-04-04 01:26:37.540320 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=7.68 ms 2026-04-04 01:26:38.537015 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.38 ms 2026-04-04 01:26:39.537795 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.37 ms 2026-04-04 01:26:39.537869 | orchestrator | 2026-04-04 01:26:39.537879 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-04-04 01:26:39.537888 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:26:39.537895 | orchestrator | rtt min/avg/max/mdev = 1.369/3.810/7.684/2.769 ms 2026-04-04 01:26:39.538549 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-04 01:26:41.189015 | orchestrator | 2026-04-04 01:26:41 | ERROR  | Unable to get ansible vault password 2026-04-04 01:26:41.189101 | orchestrator | 2026-04-04 01:26:41 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-04 01:26:41.189118 | orchestrator | 2026-04-04 01:26:41 | ERROR  | Dropping encrypted entries 2026-04-04 01:26:42.210410 | orchestrator | 2026-04-04 01:26:42 | INFO  | No migratable instances found on node testbed-node-5 2026-04-04 01:26:42.557102 | orchestrator | + compute_list 2026-04-04 01:26:42.557185 | orchestrator | + osism manage compute list testbed-node-3 2026-04-04 01:26:44.133320 | orchestrator | 2026-04-04 01:26:44 | ERROR  | Unable to get ansible vault password 2026-04-04 01:26:44.133406 | orchestrator | 2026-04-04 01:26:44 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:26:44.133420 | orchestrator | 2026-04-04 01:26:44 | ERROR  | Dropping encrypted entries 2026-04-04 01:26:45.686182 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:26:45.686254 | orchestrator | | ID | Name | Status | 2026-04-04 01:26:45.686260 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:26:45.686265 | orchestrator | | 7e50c4b8-466c-4930-b43d-8a691787941d | test-4 | ACTIVE | 2026-04-04 01:26:45.686269 | orchestrator | | f6a8ec72-d283-4b08-9ad1-5872494cf29d | test-3 | ACTIVE | 2026-04-04 01:26:45.686274 | orchestrator | | 1b50d71d-6e54-49f7-82a9-386ac20253f3 | test-2 | ACTIVE | 2026-04-04 01:26:45.686278 | orchestrator | | 72b650d0-74af-4ee6-aa5f-93ae779b1e72 | test | ACTIVE | 2026-04-04 01:26:45.686282 | orchestrator | | 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a | test-1 | ACTIVE | 2026-04-04 01:26:45.686307 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:26:46.055047 | orchestrator | + osism manage compute list testbed-node-4 2026-04-04 01:26:47.668177 | orchestrator | 2026-04-04 01:26:47 | ERROR  | Unable to get ansible vault password 2026-04-04 01:26:47.668245 | orchestrator | 2026-04-04 01:26:47 | ERROR  | Unable to get 
vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:26:47.668252 | orchestrator | 2026-04-04 01:26:47 | ERROR  | Dropping encrypted entries 2026-04-04 01:26:48.863120 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:48.863262 | orchestrator | | ID | Name | Status | 2026-04-04 01:26:48.863275 | orchestrator | |------+--------+----------| 2026-04-04 01:26:48.863281 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:49.151458 | orchestrator | + osism manage compute list testbed-node-5 2026-04-04 01:26:50.760268 | orchestrator | 2026-04-04 01:26:50 | ERROR  | Unable to get ansible vault password 2026-04-04 01:26:50.760363 | orchestrator | 2026-04-04 01:26:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:26:50.760377 | orchestrator | 2026-04-04 01:26:50 | ERROR  | Dropping encrypted entries 2026-04-04 01:26:51.776363 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:51.776428 | orchestrator | | ID | Name | Status | 2026-04-04 01:26:51.776437 | orchestrator | |------+--------+----------| 2026-04-04 01:26:51.776442 | orchestrator | +------+--------+----------+ 2026-04-04 01:26:52.086215 | orchestrator | + server_ping 2026-04-04 01:26:52.088442 | orchestrator | ++ tr -d '\r' 2026-04-04 01:26:52.088498 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-04 01:26:54.651009 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:26:54.651064 | orchestrator | + ping -c3 192.168.112.182 2026-04-04 01:26:54.658350 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 
2026-04-04 01:26:54.658410 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=4.05 ms
2026-04-04 01:26:55.658391 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.36 ms
2026-04-04 01:26:56.659398 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.76 ms
2026-04-04 01:26:56.659469 | orchestrator |
2026-04-04 01:26:56.659476 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-04-04 01:26:56.659481 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:26:56.659486 | orchestrator | rtt min/avg/max/mdev = 1.764/2.726/4.054/0.969 ms
2026-04-04 01:26:56.660253 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:26:56.660276 | orchestrator | + ping -c3 192.168.112.167
2026-04-04 01:26:56.671811 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data.
2026-04-04 01:26:56.671901 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=6.39 ms
2026-04-04 01:26:57.668787 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=1.89 ms
2026-04-04 01:26:58.669759 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=1.71 ms
2026-04-04 01:26:58.669836 | orchestrator |
2026-04-04 01:26:58.669843 | orchestrator | --- 192.168.112.167 ping statistics ---
2026-04-04 01:26:58.669848 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:26:58.669853 | orchestrator | rtt min/avg/max/mdev = 1.708/3.330/6.393/2.166 ms
2026-04-04 01:26:58.670411 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:26:58.670438 | orchestrator | + ping -c3 192.168.112.117
2026-04-04 01:26:58.681574 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2026-04-04 01:26:58.681652 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.00 ms
2026-04-04 01:26:59.678880 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.07 ms
2026-04-04 01:27:00.680940 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.94 ms
2026-04-04 01:27:00.681055 | orchestrator |
2026-04-04 01:27:00.681065 | orchestrator | --- 192.168.112.117 ping statistics ---
2026-04-04 01:27:00.681071 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-04 01:27:00.681076 | orchestrator | rtt min/avg/max/mdev = 1.937/3.335/6.003/1.886 ms
2026-04-04 01:27:00.681083 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:27:00.681090 | orchestrator | + ping -c3 192.168.112.136
2026-04-04 01:27:00.692311 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2026-04-04 01:27:00.692392 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=6.02 ms
2026-04-04 01:27:01.689960 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.28 ms
2026-04-04 01:27:02.691754 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.72 ms
2026-04-04 01:27:02.691844 | orchestrator |
2026-04-04 01:27:02.691856 | orchestrator | --- 192.168.112.136 ping statistics ---
2026-04-04 01:27:02.691864 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:27:02.691873 | orchestrator | rtt min/avg/max/mdev = 1.718/3.341/6.022/1.909 ms
2026-04-04 01:27:02.691880 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:27:02.691887 | orchestrator | + ping -c3 192.168.112.188
2026-04-04 01:27:02.703963 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-04-04 01:27:02.704048 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=7.36 ms
2026-04-04 01:27:03.700757 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.42 ms
2026-04-04 01:27:04.701967 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.86 ms
2026-04-04 01:27:04.702083 | orchestrator |
2026-04-04 01:27:04.702096 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-04-04 01:27:04.702107 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:27:04.702114 | orchestrator | rtt min/avg/max/mdev = 1.857/3.877/7.359/2.472 ms
2026-04-04 01:27:04.702415 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-04-04 01:27:06.356720 | orchestrator | 2026-04-04 01:27:06 | ERROR  | Unable to get ansible vault password
2026-04-04 01:27:06.356804 | orchestrator | 2026-04-04 01:27:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:27:06.356815 | orchestrator | 2026-04-04 01:27:06 | ERROR  | Dropping encrypted entries
2026-04-04 01:27:07.803316 | orchestrator | 2026-04-04 01:27:07 | INFO  | Live migrating server 7e50c4b8-466c-4930-b43d-8a691787941d
2026-04-04 01:27:21.160849 | orchestrator | 2026-04-04 01:27:21 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:23.539040 | orchestrator | 2026-04-04 01:27:23 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:25.924594 | orchestrator | 2026-04-04 01:27:25 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:28.238280 | orchestrator | 2026-04-04 01:27:28 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:30.651774 | orchestrator | 2026-04-04 01:27:30 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:32.993188 | orchestrator | 2026-04-04 01:27:32 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:35.224898 | orchestrator | 2026-04-04 01:27:35 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:37.520064 | orchestrator | 2026-04-04 01:27:37 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:27:39.781121 | orchestrator | 2026-04-04 01:27:39 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) completed with status ACTIVE
2026-04-04 01:27:39.781199 | orchestrator | 2026-04-04 01:27:39 | INFO  | Live migrating server f6a8ec72-d283-4b08-9ad1-5872494cf29d
2026-04-04 01:27:51.614880 | orchestrator | 2026-04-04 01:27:51 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:27:54.030286 | orchestrator | 2026-04-04 01:27:54 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:27:56.458459 | orchestrator | 2026-04-04 01:27:56 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:27:58.730114 | orchestrator | 2026-04-04 01:27:58 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:28:01.138130 | orchestrator | 2026-04-04 01:28:01 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:28:03.449330 | orchestrator | 2026-04-04 01:28:03 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:28:05.680804 | orchestrator | 2026-04-04 01:28:05 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:28:08.086250 | orchestrator | 2026-04-04 01:28:08 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:28:10.508925 | orchestrator | 2026-04-04 01:28:10 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) completed with status ACTIVE
2026-04-04 01:28:10.508994 | orchestrator | 2026-04-04 01:28:10 | INFO  | Live migrating server 1b50d71d-6e54-49f7-82a9-386ac20253f3
2026-04-04 01:28:23.109550 | orchestrator | 2026-04-04 01:28:23 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:25.343006 | orchestrator | 2026-04-04 01:28:25 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:27.657704 | orchestrator | 2026-04-04 01:28:27 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:30.028115 | orchestrator | 2026-04-04 01:28:30 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:32.308608 | orchestrator | 2026-04-04 01:28:32 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:34.559232 | orchestrator | 2026-04-04 01:28:34 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:36.912716 | orchestrator | 2026-04-04 01:28:36 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:39.228624 | orchestrator | 2026-04-04 01:28:39 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:28:41.553676 | orchestrator | 2026-04-04 01:28:41 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) completed with status ACTIVE
2026-04-04 01:28:41.553818 | orchestrator | 2026-04-04 01:28:41 | INFO  | Live migrating server 72b650d0-74af-4ee6-aa5f-93ae779b1e72
2026-04-04 01:28:52.597539 | orchestrator | 2026-04-04 01:28:52 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:28:55.003741 | orchestrator | 2026-04-04 01:28:55 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:28:57.371353 | orchestrator | 2026-04-04 01:28:57 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:28:59.655744 | orchestrator | 2026-04-04 01:28:59 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:29:01.967084 | orchestrator | 2026-04-04 01:29:01 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:29:04.219652 | orchestrator | 2026-04-04 01:29:04 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:29:06.497987 | orchestrator | 2026-04-04 01:29:06 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:29:08.897420 | orchestrator | 2026-04-04 01:29:08 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:29:11.315689 | orchestrator | 2026-04-04 01:29:11 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:29:13.571778 | orchestrator | 2026-04-04 01:29:13 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:29:15.880088 | orchestrator | 2026-04-04 01:29:15 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) completed with status ACTIVE
2026-04-04 01:29:15.880182 | orchestrator | 2026-04-04 01:29:15 | INFO  | Live migrating server 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a
2026-04-04 01:29:27.008038 | orchestrator | 2026-04-04 01:29:27 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:29.309036 | orchestrator | 2026-04-04 01:29:29 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:31.638394 | orchestrator | 2026-04-04 01:29:31 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:33.974483 | orchestrator | 2026-04-04 01:29:33 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:36.220219 | orchestrator | 2026-04-04 01:29:36 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:38.442162 | orchestrator | 2026-04-04 01:29:38 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:40.726541 | orchestrator | 2026-04-04 01:29:40 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:43.008551 | orchestrator | 2026-04-04 01:29:43 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:29:45.323336 | orchestrator | 2026-04-04 01:29:45 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) completed with status ACTIVE
2026-04-04 01:29:45.638422 | orchestrator | + compute_list
2026-04-04 01:29:45.638505 | orchestrator | + osism manage compute list testbed-node-3
2026-04-04 01:29:47.253785 | orchestrator | 2026-04-04 01:29:47 | ERROR  | Unable to get ansible vault password
2026-04-04 01:29:47.253835 | orchestrator | 2026-04-04 01:29:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:29:47.253841 | orchestrator | 2026-04-04 01:29:47 | ERROR  | Dropping encrypted entries
2026-04-04 01:29:48.312227 | orchestrator | +------+--------+----------+
2026-04-04 01:29:48.312289 | orchestrator | | ID | Name | Status |
2026-04-04 01:29:48.312299 | orchestrator | |------+--------+----------|
2026-04-04 01:29:48.312305 | orchestrator | +------+--------+----------+
2026-04-04 01:29:48.625507 | orchestrator | + osism manage compute list testbed-node-4
2026-04-04 01:29:50.150253 | orchestrator | 2026-04-04 01:29:50 | ERROR  | Unable to get ansible vault password
2026-04-04 01:29:50.150366 | orchestrator | 2026-04-04 01:29:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:29:50.150379 | orchestrator | 2026-04-04 01:29:50 | ERROR  | Dropping encrypted entries
2026-04-04 01:29:51.916582 | orchestrator | +--------------------------------------+--------+----------+
2026-04-04 01:29:51.916692 | orchestrator | | ID | Name | Status |
2026-04-04 01:29:51.916702 | orchestrator | |--------------------------------------+--------+----------|
2026-04-04 01:29:51.916710 | orchestrator | | 7e50c4b8-466c-4930-b43d-8a691787941d | test-4 | ACTIVE |
2026-04-04 01:29:51.916716 | orchestrator | | f6a8ec72-d283-4b08-9ad1-5872494cf29d | test-3 | ACTIVE |
2026-04-04 01:29:51.916723 | orchestrator | | 1b50d71d-6e54-49f7-82a9-386ac20253f3 | test-2 | ACTIVE |
2026-04-04 01:29:51.916730 | orchestrator | | 72b650d0-74af-4ee6-aa5f-93ae779b1e72 | test | ACTIVE |
2026-04-04 01:29:51.917436 | orchestrator | | 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a | test-1 | ACTIVE |
2026-04-04 01:29:51.917474 | orchestrator | +--------------------------------------+--------+----------+
2026-04-04 01:29:52.240543 | orchestrator | + osism manage compute list testbed-node-5
2026-04-04 01:29:53.808030 | orchestrator | 2026-04-04 01:29:53 | ERROR  | Unable to get ansible vault password
2026-04-04 01:29:53.808161 | orchestrator | 2026-04-04 01:29:53 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:29:53.808176 | orchestrator | 2026-04-04 01:29:53 | ERROR  | Dropping encrypted entries
2026-04-04 01:29:54.904442 | orchestrator | +------+--------+----------+
2026-04-04 01:29:54.904502 | orchestrator | | ID | Name | Status |
2026-04-04 01:29:54.904510 | orchestrator | |------+--------+----------|
2026-04-04 01:29:54.904517 | orchestrator | +------+--------+----------+
2026-04-04 01:29:55.269486 | orchestrator | + server_ping
2026-04-04 01:29:55.270358 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-04 01:29:55.270401 | orchestrator | ++ tr -d '\r'
2026-04-04 01:29:58.160187 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:29:58.160260 | orchestrator | + ping -c3 192.168.112.182
2026-04-04 01:29:58.170234 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-04-04 01:29:58.170306 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=8.32 ms
2026-04-04 01:29:59.165840 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.33 ms
2026-04-04 01:30:00.168763 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.87 ms
2026-04-04 01:30:00.168836 | orchestrator |
2026-04-04 01:30:00.168842 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-04-04 01:30:00.168847 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:30:00.168852 | orchestrator | rtt min/avg/max/mdev = 1.873/4.175/8.324/2.939 ms
2026-04-04 01:30:00.169347 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:30:00.169409 | orchestrator | + ping -c3 192.168.112.167
2026-04-04 01:30:00.179662 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data.
2026-04-04 01:30:00.179750 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=7.15 ms
2026-04-04 01:30:01.176804 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.80 ms
2026-04-04 01:30:02.176655 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=0.980 ms
2026-04-04 01:30:02.176714 | orchestrator |
2026-04-04 01:30:02.176723 | orchestrator | --- 192.168.112.167 ping statistics ---
2026-04-04 01:30:02.176731 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:30:02.176738 | orchestrator | rtt min/avg/max/mdev = 0.980/3.642/7.148/2.587 ms
2026-04-04 01:30:02.177310 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:30:02.177341 | orchestrator | + ping -c3 192.168.112.117
2026-04-04 01:30:02.186364 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2026-04-04 01:30:02.186442 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=4.34 ms
2026-04-04 01:30:03.185016 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.34 ms
2026-04-04 01:30:04.187049 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.21 ms
2026-04-04 01:30:04.187120 | orchestrator |
2026-04-04 01:30:04.187131 | orchestrator | --- 192.168.112.117 ping statistics ---
2026-04-04 01:30:04.187170 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:30:04.187177 | orchestrator | rtt min/avg/max/mdev = 1.214/2.297/4.336/1.442 ms
2026-04-04 01:30:04.187241 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:30:04.187250 | orchestrator | + ping -c3 192.168.112.136
2026-04-04 01:30:04.196481 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2026-04-04 01:30:04.196542 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=3.08 ms
2026-04-04 01:30:05.197357 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.30 ms
2026-04-04 01:30:06.198192 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.16 ms
2026-04-04 01:30:06.198295 | orchestrator |
2026-04-04 01:30:06.198305 | orchestrator | --- 192.168.112.136 ping statistics ---
2026-04-04 01:30:06.198312 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-04 01:30:06.198319 | orchestrator | rtt min/avg/max/mdev = 1.158/2.180/3.080/0.789 ms
2026-04-04 01:30:06.198378 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:30:06.198389 | orchestrator | + ping -c3 192.168.112.188
2026-04-04 01:30:06.209673 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-04-04 01:30:06.209720 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.62 ms
2026-04-04 01:30:07.206479 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.65 ms
2026-04-04 01:30:08.209484 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.39 ms
2026-04-04 01:30:08.209596 | orchestrator |
2026-04-04 01:30:08.209612 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-04-04 01:30:08.210130 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-04 01:30:08.210168 | orchestrator | rtt min/avg/max/mdev = 1.392/3.221/6.618/2.404 ms
2026-04-04 01:30:08.210183 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-04-04 01:30:09.812949 | orchestrator | 2026-04-04 01:30:09 | ERROR  | Unable to get ansible vault password
2026-04-04 01:30:09.813011 | orchestrator | 2026-04-04 01:30:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:30:09.813019 | orchestrator | 2026-04-04 01:30:09 | ERROR  | Dropping encrypted entries
2026-04-04 01:30:11.382074 | orchestrator | 2026-04-04 01:30:11 | INFO  | Live migrating server 7e50c4b8-466c-4930-b43d-8a691787941d
2026-04-04 01:30:22.784319 | orchestrator | 2026-04-04 01:30:22 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:25.162255 | orchestrator | 2026-04-04 01:30:25 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:27.448156 | orchestrator | 2026-04-04 01:30:27 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:29.769197 | orchestrator | 2026-04-04 01:30:29 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:32.072483 | orchestrator | 2026-04-04 01:30:32 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:34.320597 | orchestrator | 2026-04-04 01:30:34 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:36.710852 | orchestrator | 2026-04-04 01:30:36 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:39.004205 | orchestrator | 2026-04-04 01:30:39 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:41.233858 | orchestrator | 2026-04-04 01:30:41 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:43.482489 | orchestrator | 2026-04-04 01:30:43 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:45.849134 | orchestrator | 2026-04-04 01:30:45 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) is still in progress
2026-04-04 01:30:48.225849 | orchestrator | 2026-04-04 01:30:48 | INFO  | Live migration of 7e50c4b8-466c-4930-b43d-8a691787941d (test-4) completed with status ACTIVE
2026-04-04 01:30:48.226117 | orchestrator | 2026-04-04 01:30:48 | INFO  | Live migrating server f6a8ec72-d283-4b08-9ad1-5872494cf29d
2026-04-04 01:30:58.933327 | orchestrator | 2026-04-04 01:30:58 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:01.232474 | orchestrator | 2026-04-04 01:31:01 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:03.598873 | orchestrator | 2026-04-04 01:31:03 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:05.960805 | orchestrator | 2026-04-04 01:31:05 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:08.329566 | orchestrator | 2026-04-04 01:31:08 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:10.628335 | orchestrator | 2026-04-04 01:31:10 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:12.864257 | orchestrator | 2026-04-04 01:31:12 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:15.184254 | orchestrator | 2026-04-04 01:31:15 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) is still in progress
2026-04-04 01:31:17.503852 | orchestrator | 2026-04-04 01:31:17 | INFO  | Live migration of f6a8ec72-d283-4b08-9ad1-5872494cf29d (test-3) completed with status ACTIVE
2026-04-04 01:31:17.503963 | orchestrator | 2026-04-04 01:31:17 | INFO  | Live migrating server 1b50d71d-6e54-49f7-82a9-386ac20253f3
2026-04-04 01:31:27.281096 | orchestrator | 2026-04-04 01:31:27 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:29.648298 | orchestrator | 2026-04-04 01:31:29 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:32.010908 | orchestrator | 2026-04-04 01:31:32 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:34.480829 | orchestrator | 2026-04-04 01:31:34 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:36.796359 | orchestrator | 2026-04-04 01:31:36 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:39.253672 | orchestrator | 2026-04-04 01:31:39 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:41.483484 | orchestrator | 2026-04-04 01:31:41 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:43.739172 | orchestrator | 2026-04-04 01:31:43 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:46.064810 | orchestrator | 2026-04-04 01:31:46 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) is still in progress
2026-04-04 01:31:48.363732 | orchestrator | 2026-04-04 01:31:48 | INFO  | Live migration of 1b50d71d-6e54-49f7-82a9-386ac20253f3 (test-2) completed with status ACTIVE
2026-04-04 01:31:48.363786 | orchestrator | 2026-04-04 01:31:48 | INFO  | Live migrating server 72b650d0-74af-4ee6-aa5f-93ae779b1e72
2026-04-04 01:31:58.182721 | orchestrator | 2026-04-04 01:31:58 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:00.629536 | orchestrator | 2026-04-04 01:32:00 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:02.923754 | orchestrator | 2026-04-04 01:32:02 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:05.150734 | orchestrator | 2026-04-04 01:32:05 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:07.414803 | orchestrator | 2026-04-04 01:32:07 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:09.696473 | orchestrator | 2026-04-04 01:32:09 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:12.058060 | orchestrator | 2026-04-04 01:32:12 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:14.415519 | orchestrator | 2026-04-04 01:32:14 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:16.663602 | orchestrator | 2026-04-04 01:32:16 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:18.949114 | orchestrator | 2026-04-04 01:32:18 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) is still in progress
2026-04-04 01:32:21.259502 | orchestrator | 2026-04-04 01:32:21 | INFO  | Live migration of 72b650d0-74af-4ee6-aa5f-93ae779b1e72 (test) completed with status ACTIVE
2026-04-04 01:32:21.259587 | orchestrator | 2026-04-04 01:32:21 | INFO  | Live migrating server 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a
2026-04-04 01:32:30.957935 | orchestrator | 2026-04-04 01:32:30 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:33.188931 | orchestrator | 2026-04-04 01:32:33 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:35.490733 | orchestrator | 2026-04-04 01:32:35 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:37.706693 | orchestrator | 2026-04-04 01:32:37 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:39.972422 | orchestrator | 2026-04-04 01:32:39 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:42.407693 | orchestrator | 2026-04-04 01:32:42 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:44.785815 | orchestrator | 2026-04-04 01:32:44 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:47.137877 | orchestrator | 2026-04-04 01:32:47 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) is still in progress
2026-04-04 01:32:49.428161 | orchestrator | 2026-04-04 01:32:49 | INFO  | Live migration of 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a (test-1) completed with status ACTIVE
2026-04-04 01:32:49.726558 | orchestrator | + compute_list
2026-04-04 01:32:49.726603 | orchestrator | + osism manage compute list testbed-node-3
2026-04-04 01:32:51.373968 | orchestrator | 2026-04-04 01:32:51 | ERROR  | Unable to get ansible vault password
2026-04-04 01:32:51.374067 | orchestrator | 2026-04-04 01:32:51 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:32:51.374104 | orchestrator | 2026-04-04 01:32:51 | ERROR  | Dropping encrypted entries
2026-04-04 01:32:52.460806 | orchestrator | +------+--------+----------+
2026-04-04 01:32:52.460862 | orchestrator | | ID | Name | Status |
2026-04-04 01:32:52.460868 | orchestrator | |------+--------+----------|
2026-04-04 01:32:52.460872 | orchestrator | +------+--------+----------+
2026-04-04 01:32:52.834382 | orchestrator | + osism manage compute list testbed-node-4
2026-04-04 01:32:54.401997 | orchestrator | 2026-04-04 01:32:54 | ERROR  | Unable to get ansible vault password
2026-04-04 01:32:54.402098 | orchestrator | 2026-04-04 01:32:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:32:54.402111 | orchestrator | 2026-04-04 01:32:54 | ERROR  | Dropping encrypted entries
2026-04-04 01:32:55.439604 | orchestrator | +------+--------+----------+
2026-04-04 01:32:55.439662 | orchestrator | | ID | Name | Status |
2026-04-04 01:32:55.439670 | orchestrator | |------+--------+----------|
2026-04-04 01:32:55.439678 | orchestrator | +------+--------+----------+
2026-04-04 01:32:55.738721 | orchestrator | + osism manage compute list testbed-node-5
2026-04-04 01:32:57.338226 | orchestrator | 2026-04-04 01:32:57 | ERROR  | Unable to get ansible vault password
2026-04-04 01:32:57.338283 | orchestrator | 2026-04-04 01:32:57 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:32:57.338292 | orchestrator | 2026-04-04 01:32:57 | ERROR  | Dropping encrypted entries
2026-04-04 01:32:58.861412 | orchestrator | +--------------------------------------+--------+----------+
2026-04-04 01:32:58.861580 | orchestrator | | ID | Name | Status |
2026-04-04 01:32:58.861591 | orchestrator | |--------------------------------------+--------+----------|
2026-04-04 01:32:58.861598 | orchestrator | | 7e50c4b8-466c-4930-b43d-8a691787941d | test-4 | ACTIVE |
2026-04-04 01:32:58.861605 | orchestrator | | f6a8ec72-d283-4b08-9ad1-5872494cf29d | test-3 | ACTIVE |
2026-04-04 01:32:58.861611 | orchestrator | | 1b50d71d-6e54-49f7-82a9-386ac20253f3 | test-2 | ACTIVE |
2026-04-04 01:32:58.861618 | orchestrator | | 72b650d0-74af-4ee6-aa5f-93ae779b1e72 | test | ACTIVE |
2026-04-04 01:32:58.861624 | orchestrator | | 7d493c7d-6100-4b1d-9c6b-f29ee6a0b29a | test-1 | ACTIVE |
2026-04-04 01:32:58.861630 | orchestrator | +--------------------------------------+--------+----------+
2026-04-04 01:32:59.187648 | orchestrator | + server_ping 2026-04-04 01:32:59.187719 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-04 01:32:59.188028 | orchestrator | ++ tr -d '\r' 2026-04-04 01:33:02.488105 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:33:02.488178 | orchestrator | + ping -c3 192.168.112.182 2026-04-04 01:33:02.497321 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2026-04-04 01:33:02.497394 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=6.19 ms 2026-04-04 01:33:03.494072 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=1.58 ms 2026-04-04 01:33:04.496248 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.35 ms 2026-04-04 01:33:04.496309 | orchestrator | 2026-04-04 01:33:04.496315 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-04-04 01:33:04.496321 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:33:04.496325 | orchestrator | rtt min/avg/max/mdev = 1.353/3.039/6.185/2.226 ms 2026-04-04 01:33:04.496862 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:33:04.496923 | orchestrator | + ping -c3 192.168.112.167 2026-04-04 01:33:04.505080 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 
2026-04-04 01:33:04.505139 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=3.91 ms 2026-04-04 01:33:05.504534 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.07 ms 2026-04-04 01:33:06.505509 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=1.90 ms 2026-04-04 01:33:06.505603 | orchestrator | 2026-04-04 01:33:06.505650 | orchestrator | --- 192.168.112.167 ping statistics --- 2026-04-04 01:33:06.505660 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2026-04-04 01:33:06.505667 | orchestrator | rtt min/avg/max/mdev = 1.895/2.625/3.908/0.909 ms 2026-04-04 01:33:06.505675 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:33:06.505812 | orchestrator | + ping -c3 192.168.112.117 2026-04-04 01:33:06.518499 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2026-04-04 01:33:06.518617 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=7.31 ms 2026-04-04 01:33:07.515178 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.27 ms 2026-04-04 01:33:08.516281 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.64 ms 2026-04-04 01:33:08.516654 | orchestrator | 2026-04-04 01:33:08.516678 | orchestrator | --- 192.168.112.117 ping statistics --- 2026-04-04 01:33:08.516685 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:33:08.516690 | orchestrator | rtt min/avg/max/mdev = 1.640/3.740/7.307/2.535 ms 2026-04-04 01:33:08.517307 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:33:08.517409 | orchestrator | + ping -c3 192.168.112.136 2026-04-04 01:33:08.528636 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data. 
2026-04-04 01:33:08.528854 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=6.41 ms 2026-04-04 01:33:09.525064 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=1.50 ms 2026-04-04 01:33:10.527155 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.25 ms 2026-04-04 01:33:10.527206 | orchestrator | 2026-04-04 01:33:10.527212 | orchestrator | --- 192.168.112.136 ping statistics --- 2026-04-04 01:33:10.527217 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:33:10.527221 | orchestrator | rtt min/avg/max/mdev = 1.247/3.052/6.413/2.378 ms 2026-04-04 01:33:10.527847 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:33:10.527870 | orchestrator | + ping -c3 192.168.112.188 2026-04-04 01:33:10.536765 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 2026-04-04 01:33:10.536809 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=4.78 ms 2026-04-04 01:33:11.534590 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.49 ms 2026-04-04 01:33:12.536086 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.15 ms 2026-04-04 01:33:12.536147 | orchestrator | 2026-04-04 01:33:12.536523 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-04-04 01:33:12.536550 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:33:12.536556 | orchestrator | rtt min/avg/max/mdev = 1.148/2.471/4.780/1.638 ms 2026-04-04 01:33:12.783684 | orchestrator | ok: Runtime: 0:17:32.044483 2026-04-04 01:33:12.847710 | 2026-04-04 01:33:12.847908 | TASK [Run tempest] 2026-04-04 01:33:13.519573 | orchestrator | + set -e 2026-04-04 01:33:13.519680 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 01:33:13.519696 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 
01:33:13.519705 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 01:33:13.519715 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 01:33:13.519722 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 01:33:13.519730 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 01:33:13.519782 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 01:33:13.519795 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:33:13.519806 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:33:13.519813 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-04 01:33:13.519823 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-04 01:33:13.519828 | orchestrator | ++ export ARA=false 2026-04-04 01:33:13.519831 | orchestrator | ++ ARA=false 2026-04-04 01:33:13.519837 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 01:33:13.519841 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 01:33:13.519845 | orchestrator | ++ export TEMPEST=true 2026-04-04 01:33:13.519850 | orchestrator | ++ TEMPEST=true 2026-04-04 01:33:13.519857 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 01:33:13.519863 | orchestrator | ++ IS_ZUUL=true 2026-04-04 01:33:13.519870 | orchestrator | 2026-04-04 01:33:13.519876 | orchestrator | # Tempest 2026-04-04 01:33:13.519883 | orchestrator | 2026-04-04 01:33:13.519892 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-04 01:33:13.519898 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-04-04 01:33:13.519904 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 01:33:13.519910 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 01:33:13.519916 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 01:33:13.519922 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 01:33:13.519942 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 01:33:13.519949 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 01:33:13.519955 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 
01:33:13.519962 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 01:33:13.519969 | orchestrator | + echo 2026-04-04 01:33:13.519976 | orchestrator | + echo '# Tempest' 2026-04-04 01:33:13.519982 | orchestrator | + echo 2026-04-04 01:33:13.519989 | orchestrator | + [[ ! -e /opt/tempest ]] 2026-04-04 01:33:13.519995 | orchestrator | + osism apply tempest --skip-tags run-tempest 2026-04-04 01:33:24.681166 | orchestrator | 2026-04-04 01:33:24 | INFO  | Prepare task for execution of tempest. 2026-04-04 01:33:24.762273 | orchestrator | 2026-04-04 01:33:24 | INFO  | Task f8c0f38f-0134-49c7-8f5e-c8f3a4359f83 (tempest) was prepared for execution. 2026-04-04 01:33:24.762352 | orchestrator | 2026-04-04 01:33:24 | INFO  | It takes a moment until task f8c0f38f-0134-49c7-8f5e-c8f3a4359f83 (tempest) has been started and output is visible here. 2026-04-04 01:34:41.136308 | orchestrator | 2026-04-04 01:34:41.136423 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-04-04 01:34:41.136438 | orchestrator | 2026-04-04 01:34:41.136445 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-04-04 01:34:41.136464 | orchestrator | Saturday 04 April 2026 01:33:28 +0000 (0:00:00.320) 0:00:00.320 ******** 2026-04-04 01:34:41.136471 | orchestrator | changed: [testbed-manager] 2026-04-04 01:34:41.136479 | orchestrator | 2026-04-04 01:34:41.136486 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-04-04 01:34:41.136493 | orchestrator | Saturday 04 April 2026 01:33:29 +0000 (0:00:00.999) 0:00:01.319 ******** 2026-04-04 01:34:41.136501 | orchestrator | changed: [testbed-manager] 2026-04-04 01:34:41.136507 | orchestrator | 2026-04-04 01:34:41.136514 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-04-04 01:34:41.136521 | orchestrator | Saturday 04 April 2026 01:33:30 +0000 (0:00:01.180) 
0:00:02.500 ******** 2026-04-04 01:34:41.136528 | orchestrator | ok: [testbed-manager] 2026-04-04 01:34:41.136536 | orchestrator | 2026-04-04 01:34:41.136543 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-04-04 01:34:41.136549 | orchestrator | Saturday 04 April 2026 01:33:30 +0000 (0:00:00.449) 0:00:02.950 ******** 2026-04-04 01:34:41.136556 | orchestrator | changed: [testbed-manager] 2026-04-04 01:34:41.136563 | orchestrator | 2026-04-04 01:34:41.136570 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-04-04 01:34:41.136577 | orchestrator | Saturday 04 April 2026 01:33:51 +0000 (0:00:20.884) 0:00:23.834 ******** 2026-04-04 01:34:41.136612 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-04-04 01:34:41.136620 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-04-04 01:34:41.136677 | orchestrator | 2026-04-04 01:34:41.136686 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-04-04 01:34:41.136693 | orchestrator | Saturday 04 April 2026 01:34:00 +0000 (0:00:08.843) 0:00:32.677 ******** 2026-04-04 01:34:41.136699 | orchestrator | ok: [testbed-manager] => { 2026-04-04 01:34:41.136705 | orchestrator |  "changed": false, 2026-04-04 01:34:41.136711 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:34:41.136718 | orchestrator | } 2026-04-04 01:34:41.136725 | orchestrator | 2026-04-04 01:34:41.136731 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-04-04 01:34:41.136737 | orchestrator | Saturday 04 April 2026 01:34:00 +0000 (0:00:00.176) 0:00:32.854 ******** 2026-04-04 01:34:41.136743 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136765 | orchestrator | 2026-04-04 01:34:41.136773 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] 
************************ 2026-04-04 01:34:41.136778 | orchestrator | Saturday 04 April 2026 01:34:04 +0000 (0:00:03.629) 0:00:36.483 ******** 2026-04-04 01:34:41.136782 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136785 | orchestrator | 2026-04-04 01:34:41.136789 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-04-04 01:34:41.136800 | orchestrator | Saturday 04 April 2026 01:34:06 +0000 (0:00:01.882) 0:00:38.366 ******** 2026-04-04 01:34:41.136804 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136808 | orchestrator | 2026-04-04 01:34:41.136812 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-04-04 01:34:41.136816 | orchestrator | Saturday 04 April 2026 01:34:09 +0000 (0:00:03.756) 0:00:42.122 ******** 2026-04-04 01:34:41.136820 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136824 | orchestrator | 2026-04-04 01:34:41.136828 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-04-04 01:34:41.136832 | orchestrator | Saturday 04 April 2026 01:34:10 +0000 (0:00:00.186) 0:00:42.308 ******** 2026-04-04 01:34:41.136835 | orchestrator | changed: [testbed-manager] 2026-04-04 01:34:41.136840 | orchestrator | 2026-04-04 01:34:41.136844 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-04-04 01:34:41.136848 | orchestrator | Saturday 04 April 2026 01:34:12 +0000 (0:00:02.675) 0:00:44.984 ******** 2026-04-04 01:34:41.136852 | orchestrator | changed: [testbed-manager] 2026-04-04 01:34:41.136856 | orchestrator | 2026-04-04 01:34:41.136860 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-04-04 01:34:41.136864 | orchestrator | Saturday 04 April 2026 01:34:21 +0000 (0:00:08.790) 0:00:53.775 ******** 2026-04-04 01:34:41.136867 | orchestrator | 
changed: [testbed-manager] 2026-04-04 01:34:41.136871 | orchestrator | 2026-04-04 01:34:41.136875 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-04-04 01:34:41.136879 | orchestrator | Saturday 04 April 2026 01:34:22 +0000 (0:00:00.696) 0:00:54.471 ******** 2026-04-04 01:34:41.136883 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136886 | orchestrator | 2026-04-04 01:34:41.136890 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-04-04 01:34:41.136894 | orchestrator | Saturday 04 April 2026 01:34:23 +0000 (0:00:01.522) 0:00:55.993 ******** 2026-04-04 01:34:41.136898 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136901 | orchestrator | 2026-04-04 01:34:41.136905 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] *** 2026-04-04 01:34:41.136909 | orchestrator | Saturday 04 April 2026 01:34:25 +0000 (0:00:01.553) 0:00:57.546 ******** 2026-04-04 01:34:41.136913 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136916 | orchestrator | 2026-04-04 01:34:41.136920 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-04-04 01:34:41.136931 | orchestrator | Saturday 04 April 2026 01:34:25 +0000 (0:00:00.184) 0:00:57.731 ******** 2026-04-04 01:34:41.136935 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136939 | orchestrator | 2026-04-04 01:34:41.136950 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-04-04 01:34:41.136954 | orchestrator | Saturday 04 April 2026 01:34:25 +0000 (0:00:00.382) 0:00:58.114 ******** 2026-04-04 01:34:41.136958 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:34:41.136962 | orchestrator | 2026-04-04 01:34:41.136965 | orchestrator | TASK [osism.validations.tempest : Assert floating network 
id has been resolved] *** 2026-04-04 01:34:41.136986 | orchestrator | Saturday 04 April 2026 01:34:29 +0000 (0:00:03.835) 0:01:01.949 ******** 2026-04-04 01:34:41.136990 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-04-04 01:34:41.136995 | orchestrator |  "changed": false, 2026-04-04 01:34:41.136999 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:34:41.137003 | orchestrator | } 2026-04-04 01:34:41.137007 | orchestrator | 2026-04-04 01:34:41.137011 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-04-04 01:34:41.137015 | orchestrator | Saturday 04 April 2026 01:34:29 +0000 (0:00:00.181) 0:01:02.131 ******** 2026-04-04 01:34:41.137025 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-04-04 01:34:41.137030 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-04-04 01:34:41.137034 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:34:41.137038 | orchestrator | 2026-04-04 01:34:41.137041 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-04-04 01:34:41.137045 | orchestrator | Saturday 04 April 2026 01:34:30 +0000 (0:00:00.183) 0:01:02.314 ******** 2026-04-04 01:34:41.137049 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:34:41.137053 | orchestrator | 2026-04-04 01:34:41.137057 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-04-04 01:34:41.137060 | orchestrator | Saturday 04 April 2026 01:34:30 +0000 (0:00:00.172) 0:01:02.486 ******** 2026-04-04 01:34:41.137064 | orchestrator | ok: [testbed-manager] 2026-04-04 01:34:41.137068 | orchestrator | 2026-04-04 01:34:41.137071 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-04-04 01:34:41.137075 | orchestrator | Saturday 04 April 2026 
01:34:30 +0000 (0:00:00.453) 0:01:02.940 ******** 2026-04-04 01:34:41.137079 | orchestrator | changed: [testbed-manager] 2026-04-04 01:34:41.137083 | orchestrator | 2026-04-04 01:34:41.137086 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-04-04 01:34:41.137090 | orchestrator | Saturday 04 April 2026 01:34:31 +0000 (0:00:00.892) 0:01:03.833 ******** 2026-04-04 01:34:41.137094 | orchestrator | ok: [testbed-manager] 2026-04-04 01:34:41.137098 | orchestrator | 2026-04-04 01:34:41.137101 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-04-04 01:34:41.137105 | orchestrator | Saturday 04 April 2026 01:34:32 +0000 (0:00:00.417) 0:01:04.250 ******** 2026-04-04 01:34:41.137109 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:34:41.137113 | orchestrator | 2026-04-04 01:34:41.137116 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-04-04 01:34:41.137120 | orchestrator | Saturday 04 April 2026 01:34:32 +0000 (0:00:00.300) 0:01:04.551 ******** 2026-04-04 01:34:41.137124 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-04-04 01:34:41.137128 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-04-04 01:34:41.137132 | orchestrator | 2026-04-04 01:34:41.137136 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-04-04 01:34:41.137139 | orchestrator | Saturday 04 April 2026 01:34:40 +0000 (0:00:07.825) 0:01:12.376 ******** 2026-04-04 01:34:41.137143 | orchestrator | changed: [testbed-manager] 2026-04-04 01:34:41.137151 | orchestrator | 2026-04-04 01:34:41.137155 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:34:41.137159 | orchestrator | 
testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 01:34:41.137164 | orchestrator | 2026-04-04 01:34:41.137167 | orchestrator | 2026-04-04 01:34:41.137171 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:34:41.137175 | orchestrator | Saturday 04 April 2026 01:34:41 +0000 (0:00:00.962) 0:01:13.339 ******** 2026-04-04 01:34:41.137179 | orchestrator | =============================================================================== 2026-04-04 01:34:41.137182 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.88s 2026-04-04 01:34:41.137186 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.84s 2026-04-04 01:34:41.137190 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.79s 2026-04-04 01:34:41.137193 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.83s 2026-04-04 01:34:41.137200 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.84s 2026-04-04 01:34:41.137204 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.76s 2026-04-04 01:34:41.137208 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.63s 2026-04-04 01:34:41.137211 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.68s 2026-04-04 01:34:41.137215 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.88s 2026-04-04 01:34:41.137219 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.55s 2026-04-04 01:34:41.137223 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.52s 2026-04-04 01:34:41.137227 | orchestrator | osism.validations.tempest : Copy tempest 
wrapper script ----------------- 1.18s 2026-04-04 01:34:41.137230 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.00s 2026-04-04 01:34:41.137234 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 0.96s 2026-04-04 01:34:41.137238 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.89s 2026-04-04 01:34:41.137241 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.70s 2026-04-04 01:34:41.137245 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.45s 2026-04-04 01:34:41.137251 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.45s 2026-04-04 01:34:41.402580 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.42s 2026-04-04 01:34:41.402721 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.38s 2026-04-04 01:34:41.585619 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf 2026-04-04 01:34:41.590201 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf 2026-04-04 01:34:41.594393 | orchestrator | 2026-04-04 01:34:41.594479 | orchestrator | ## IDENTITY (API) 2026-04-04 01:34:41.594489 | orchestrator | 2026-04-04 01:34:41.594497 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-04 01:34:41.594506 | orchestrator | + echo 2026-04-04 01:34:41.594514 | orchestrator | + echo '## IDENTITY (API)' 2026-04-04 01:34:41.594521 | orchestrator | + echo 2026-04-04 01:34:41.594529 | orchestrator | + _tempest tempest.api.identity.v3 2026-04-04 01:34:41.594538 | orchestrator | + local regex=tempest.api.identity.v3 2026-04-04 01:34:41.595069 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest 
registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-04-04 01:34:41.595098 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:34:41.596889 | orchestrator | + tee -a /opt/tempest/20260404-0134.log 2026-04-04 01:34:45.345770 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:34:45.345876 | orchestrator | Did you mean one of these? 2026-04-04 01:34:45.345884 | orchestrator | help 2026-04-04 01:34:45.345889 | orchestrator | init 2026-04-04 01:34:45.707074 | orchestrator | 2026-04-04 01:34:45.707158 | orchestrator | ## IMAGE (API) 2026-04-04 01:34:45.707171 | orchestrator | 2026-04-04 01:34:45.707181 | orchestrator | + echo 2026-04-04 01:34:45.707190 | orchestrator | + echo '## IMAGE (API)' 2026-04-04 01:34:45.707199 | orchestrator | + echo 2026-04-04 01:34:45.707207 | orchestrator | + _tempest tempest.api.image.v2 2026-04-04 01:34:45.707215 | orchestrator | + local regex=tempest.api.image.v2 2026-04-04 01:34:45.707669 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16 2026-04-04 01:34:45.709002 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:34:45.713067 | orchestrator | + tee -a /opt/tempest/20260404-0134.log 2026-04-04 01:34:49.164093 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. 
See 'tempest --help'. 2026-04-04 01:34:49.164210 | orchestrator | Did you mean one of these? 2026-04-04 01:34:49.164225 | orchestrator | help 2026-04-04 01:34:49.164231 | orchestrator | init 2026-04-04 01:34:49.422211 | orchestrator | 2026-04-04 01:34:49.422294 | orchestrator | ## NETWORK (API) 2026-04-04 01:34:49.422302 | orchestrator | 2026-04-04 01:34:49.422308 | orchestrator | + echo 2026-04-04 01:34:49.422313 | orchestrator | + echo '## NETWORK (API)' 2026-04-04 01:34:49.422319 | orchestrator | + echo 2026-04-04 01:34:49.422325 | orchestrator | + _tempest tempest.api.network 2026-04-04 01:34:49.422330 | orchestrator | + local regex=tempest.api.network 2026-04-04 01:34:49.423068 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16 2026-04-04 01:34:49.423700 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:34:49.426384 | orchestrator | + tee -a /opt/tempest/20260404-0134.log 2026-04-04 01:34:52.669327 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:34:52.669406 | orchestrator | Did you mean one of these? 
2026-04-04 01:34:52.669416 | orchestrator | help 2026-04-04 01:34:52.669424 | orchestrator | init 2026-04-04 01:34:52.917660 | orchestrator | 2026-04-04 01:34:52.917741 | orchestrator | ## VOLUME (API) 2026-04-04 01:34:52.917751 | orchestrator | 2026-04-04 01:34:52.917759 | orchestrator | + echo 2026-04-04 01:34:52.917765 | orchestrator | + echo '## VOLUME (API)' 2026-04-04 01:34:52.917773 | orchestrator | + echo 2026-04-04 01:34:52.917780 | orchestrator | + _tempest tempest.api.volume 2026-04-04 01:34:52.917786 | orchestrator | + local regex=tempest.api.volume 2026-04-04 01:34:52.917819 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16 2026-04-04 01:34:52.918701 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:34:52.924089 | orchestrator | + tee -a /opt/tempest/20260404-0134.log 2026-04-04 01:34:56.161386 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:34:56.161463 | orchestrator | Did you mean one of these? 
2026-04-04 01:34:56.161472 | orchestrator | help 2026-04-04 01:34:56.161476 | orchestrator | init 2026-04-04 01:34:56.443591 | orchestrator | 2026-04-04 01:34:56.443684 | orchestrator | ## COMPUTE (API) 2026-04-04 01:34:56.443693 | orchestrator | 2026-04-04 01:34:56.443698 | orchestrator | + echo 2026-04-04 01:34:56.443702 | orchestrator | + echo '## COMPUTE (API)' 2026-04-04 01:34:56.443707 | orchestrator | + echo 2026-04-04 01:34:56.443711 | orchestrator | + _tempest tempest.api.compute 2026-04-04 01:34:56.443733 | orchestrator | + local regex=tempest.api.compute 2026-04-04 01:34:56.444765 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-04-04 01:34:56.446705 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:34:56.450254 | orchestrator | + tee -a /opt/tempest/20260404-0134.log 2026-04-04 01:35:00.015048 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:35:00.015165 | orchestrator | Did you mean one of these? 
2026-04-04 01:35:00.015182 | orchestrator | help 2026-04-04 01:35:00.015190 | orchestrator | init 2026-04-04 01:35:00.370337 | orchestrator | 2026-04-04 01:35:00.370408 | orchestrator | ## DNS (API) 2026-04-04 01:35:00.370414 | orchestrator | 2026-04-04 01:35:00.370418 | orchestrator | + echo 2026-04-04 01:35:00.370422 | orchestrator | + echo '## DNS (API)' 2026-04-04 01:35:00.370428 | orchestrator | + echo 2026-04-04 01:35:00.370432 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-04-04 01:35:00.370437 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-04-04 01:35:00.370443 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-04-04 01:35:00.371420 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:35:00.373928 | orchestrator | + tee -a /opt/tempest/20260404-0135.log 2026-04-04 01:35:03.990269 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:35:03.990383 | orchestrator | Did you mean one of these? 
2026-04-04 01:35:03.990396 | orchestrator | help 2026-04-04 01:35:03.990404 | orchestrator | init 2026-04-04 01:35:04.247291 | orchestrator | 2026-04-04 01:35:04.247358 | orchestrator | ## OBJECT-STORE (API) 2026-04-04 01:35:04.247365 | orchestrator | 2026-04-04 01:35:04.247370 | orchestrator | + echo 2026-04-04 01:35:04.247374 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-04-04 01:35:04.247378 | orchestrator | + echo 2026-04-04 01:35:04.247382 | orchestrator | + _tempest tempest.api.object_storage 2026-04-04 01:35:04.247387 | orchestrator | + local regex=tempest.api.object_storage 2026-04-04 01:35:04.247393 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-04-04 01:35:04.248078 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:35:04.249808 | orchestrator | + tee -a /opt/tempest/20260404-0135.log 2026-04-04 01:35:07.856038 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:35:07.856121 | orchestrator | Did you mean one of these? 
2026-04-04 01:35:07.856131 | orchestrator | help
2026-04-04 01:35:07.856138 | orchestrator | init
2026-04-04 01:35:08.474463 | orchestrator | ok: Runtime: 0:01:55.050289
2026-04-04 01:35:08.497497 |
2026-04-04 01:35:08.497691 | TASK [Check prometheus alert status]
2026-04-04 01:35:09.038156 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:09.041884 |
2026-04-04 01:35:09.042034 | PLAY RECAP
2026-04-04 01:35:09.042146 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-04 01:35:09.042198 |
2026-04-04 01:35:09.276095 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-04 01:35:09.277252 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-04 01:35:10.021433 |
2026-04-04 01:35:10.021611 | PLAY [Post output play]
2026-04-04 01:35:10.039050 |
2026-04-04 01:35:10.039220 | LOOP [stage-output : Register sources]
2026-04-04 01:35:10.122515 |
2026-04-04 01:35:10.122940 | TASK [stage-output : Check sudo]
2026-04-04 01:35:10.977088 | orchestrator | sudo: a password is required
2026-04-04 01:35:11.165685 | orchestrator | ok: Runtime: 0:00:00.011465
2026-04-04 01:35:11.180981 |
2026-04-04 01:35:11.181172 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-04 01:35:11.222949 |
2026-04-04 01:35:11.223314 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-04 01:35:11.292654 | orchestrator | ok
2026-04-04 01:35:11.301792 |
2026-04-04 01:35:11.301949 | LOOP [stage-output : Ensure target folders exist]
2026-04-04 01:35:11.782177 | orchestrator | ok: "docs"
2026-04-04 01:35:11.782615 |
2026-04-04 01:35:12.079415 | orchestrator | ok: "artifacts"
2026-04-04 01:35:12.356224 | orchestrator | ok: "logs"
2026-04-04 01:35:12.376853 |
2026-04-04 01:35:12.377039 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-04 01:35:12.418262 |
2026-04-04 01:35:12.418580 | TASK [stage-output : Make all log files readable]
2026-04-04 01:35:12.737149 | orchestrator | ok
2026-04-04 01:35:12.747531 |
2026-04-04 01:35:12.747681 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-04 01:35:12.783125 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:12.800159 |
2026-04-04 01:35:12.800333 | TASK [stage-output : Discover log files for compression]
2026-04-04 01:35:12.825750 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:12.837536 |
2026-04-04 01:35:12.837703 | LOOP [stage-output : Archive everything from logs]
2026-04-04 01:35:12.889581 |
2026-04-04 01:35:12.889782 | PLAY [Post cleanup play]
2026-04-04 01:35:12.898172 |
2026-04-04 01:35:12.898289 | TASK [Set cloud fact (Zuul deployment)]
2026-04-04 01:35:12.954740 | orchestrator | ok
2026-04-04 01:35:12.966025 |
2026-04-04 01:35:12.966154 | TASK [Set cloud fact (local deployment)]
2026-04-04 01:35:12.990488 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:13.005891 |
2026-04-04 01:35:13.006034 | TASK [Clean the cloud environment]
2026-04-04 01:35:13.661913 | orchestrator | 2026-04-04 01:35:13 - clean up servers
2026-04-04 01:35:14.461640 | orchestrator | 2026-04-04 01:35:14 - testbed-manager
2026-04-04 01:35:14.542125 | orchestrator | 2026-04-04 01:35:14 - testbed-node-1
2026-04-04 01:35:14.629458 | orchestrator | 2026-04-04 01:35:14 - testbed-node-2
2026-04-04 01:35:14.720671 | orchestrator | 2026-04-04 01:35:14 - testbed-node-5
2026-04-04 01:35:14.815578 | orchestrator | 2026-04-04 01:35:14 - testbed-node-4
2026-04-04 01:35:14.909024 | orchestrator | 2026-04-04 01:35:14 - testbed-node-3
2026-04-04 01:35:15.002841 | orchestrator | 2026-04-04 01:35:15 - testbed-node-0
2026-04-04 01:35:15.102219 | orchestrator | 2026-04-04 01:35:15 - clean up keypairs
2026-04-04 01:35:15.119336 | orchestrator | 2026-04-04 01:35:15 - testbed
2026-04-04 01:35:15.140047 | orchestrator | 2026-04-04 01:35:15 - wait for servers to be gone
2026-04-04 01:35:28.173006 | orchestrator | 2026-04-04 01:35:28 - clean up ports
2026-04-04 01:35:28.388411 | orchestrator | 2026-04-04 01:35:28 - 2e7338c2-408c-4b35-a240-74ba095e7368
2026-04-04 01:35:28.638165 | orchestrator | 2026-04-04 01:35:28 - 56bab0fb-99e7-4848-8cde-3bde9548a17c
2026-04-04 01:35:29.003381 | orchestrator | 2026-04-04 01:35:29 - 82157739-a4c7-48f3-bf06-01714e43fe47
2026-04-04 01:35:29.284907 | orchestrator | 2026-04-04 01:35:29 - 89781c4d-21d7-41c6-a8bf-c34f4b2147d3
2026-04-04 01:35:29.553652 | orchestrator | 2026-04-04 01:35:29 - 9b7e1f8d-ac7a-45ef-9f93-16c7e33bd25c
2026-04-04 01:35:29.764937 | orchestrator | 2026-04-04 01:35:29 - a5da15e1-7f55-477a-8d0a-634f1b9ef004
2026-04-04 01:35:30.164729 | orchestrator | 2026-04-04 01:35:30 - e1791048-c38c-471a-9496-6ca34b84425c
2026-04-04 01:35:30.370329 | orchestrator | 2026-04-04 01:35:30 - clean up volumes
2026-04-04 01:35:30.487013 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-3-node-base
2026-04-04 01:35:30.528544 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-1-node-base
2026-04-04 01:35:30.570379 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-0-node-base
2026-04-04 01:35:30.612881 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-2-node-base
2026-04-04 01:35:30.653601 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-4-node-base
2026-04-04 01:35:30.694304 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-5-node-base
2026-04-04 01:35:30.735556 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-manager-base
2026-04-04 01:35:30.777230 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-1-node-4
2026-04-04 01:35:30.819009 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-2-node-5
2026-04-04 01:35:30.862666 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-6-node-3
2026-04-04 01:35:30.902980 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-0-node-3
2026-04-04 01:35:30.946189 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-3-node-3
2026-04-04 01:35:30.988630 | orchestrator | 2026-04-04 01:35:30 - testbed-volume-7-node-4
2026-04-04 01:35:31.031079 | orchestrator | 2026-04-04 01:35:31 - testbed-volume-5-node-5
2026-04-04 01:35:31.076745 | orchestrator | 2026-04-04 01:35:31 - testbed-volume-8-node-5
2026-04-04 01:35:31.118461 | orchestrator | 2026-04-04 01:35:31 - testbed-volume-4-node-4
2026-04-04 01:35:31.157910 | orchestrator | 2026-04-04 01:35:31 - disconnect routers
2026-04-04 01:35:31.271136 | orchestrator | 2026-04-04 01:35:31 - testbed
2026-04-04 01:35:32.124875 | orchestrator | 2026-04-04 01:35:32 - clean up subnets
2026-04-04 01:35:32.175840 | orchestrator | 2026-04-04 01:35:32 - subnet-testbed-management
2026-04-04 01:35:32.316555 | orchestrator | 2026-04-04 01:35:32 - clean up networks
2026-04-04 01:35:32.453831 | orchestrator | 2026-04-04 01:35:32 - net-testbed-management
2026-04-04 01:35:32.718446 | orchestrator | 2026-04-04 01:35:32 - clean up security groups
2026-04-04 01:35:32.758824 | orchestrator | 2026-04-04 01:35:32 - testbed-node
2026-04-04 01:35:32.869285 | orchestrator | 2026-04-04 01:35:32 - testbed-management
2026-04-04 01:35:32.973746 | orchestrator | 2026-04-04 01:35:32 - clean up floating ips
2026-04-04 01:35:33.014171 | orchestrator | 2026-04-04 01:35:33 - 81.163.193.182
2026-04-04 01:35:33.352130 | orchestrator | 2026-04-04 01:35:33 - clean up routers
2026-04-04 01:35:33.413207 | orchestrator | 2026-04-04 01:35:33 - testbed
2026-04-04 01:35:35.063084 | orchestrator | ok: Runtime: 0:00:21.437556
2026-04-04 01:35:35.068349 |
2026-04-04 01:35:35.068554 | PLAY RECAP
2026-04-04 01:35:35.068687 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-04 01:35:35.068753 |
2026-04-04 01:35:35.220267 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-04 01:35:35.222545 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-04 01:35:35.993761 |
2026-04-04 01:35:35.993934 | PLAY [Cleanup play]
2026-04-04 01:35:36.010248 |
2026-04-04 01:35:36.010394 | TASK [Set cloud fact (Zuul deployment)]
2026-04-04 01:35:36.087826 | orchestrator | ok
2026-04-04 01:35:36.098539 |
2026-04-04 01:35:36.098710 | TASK [Set cloud fact (local deployment)]
2026-04-04 01:35:36.134013 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:36.149403 |
2026-04-04 01:35:36.149582 | TASK [Clean the cloud environment]
2026-04-04 01:35:37.362855 | orchestrator | 2026-04-04 01:35:37 - clean up servers
2026-04-04 01:35:37.865070 | orchestrator | 2026-04-04 01:35:37 - clean up keypairs
2026-04-04 01:35:37.883478 | orchestrator | 2026-04-04 01:35:37 - wait for servers to be gone
2026-04-04 01:35:37.929926 | orchestrator | 2026-04-04 01:35:37 - clean up ports
2026-04-04 01:35:38.016227 | orchestrator | 2026-04-04 01:35:38 - clean up volumes
2026-04-04 01:35:38.091790 | orchestrator | 2026-04-04 01:35:38 - disconnect routers
2026-04-04 01:35:38.124379 | orchestrator | 2026-04-04 01:35:38 - clean up subnets
2026-04-04 01:35:38.148685 | orchestrator | 2026-04-04 01:35:38 - clean up networks
2026-04-04 01:35:38.325721 | orchestrator | 2026-04-04 01:35:38 - clean up security groups
2026-04-04 01:35:38.369738 | orchestrator | 2026-04-04 01:35:38 - clean up floating ips
2026-04-04 01:35:38.394980 | orchestrator | 2026-04-04 01:35:38 - clean up routers
2026-04-04 01:35:38.690967 | orchestrator | ok: Runtime: 0:00:01.467643
2026-04-04 01:35:38.694230 |
2026-04-04 01:35:38.694385 | PLAY RECAP
2026-04-04 01:35:38.694556 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-04 01:35:38.694616 |
2026-04-04 01:35:38.826977 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-04 01:35:38.828115 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-04 01:35:39.589229 |
2026-04-04 01:35:39.589396 | PLAY [Base post-fetch]
2026-04-04 01:35:39.605243 |
2026-04-04 01:35:39.605382 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-04 01:35:39.661336 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:39.677218 |
2026-04-04 01:35:39.677480 | TASK [fetch-output : Set log path for single node]
2026-04-04 01:35:39.736508 | orchestrator | ok
2026-04-04 01:35:39.745242 |
2026-04-04 01:35:39.745379 | LOOP [fetch-output : Ensure local output dirs]
2026-04-04 01:35:40.246582 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/work/logs"
2026-04-04 01:35:40.544379 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/work/artifacts"
2026-04-04 01:35:40.837990 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/65ae36fb71e247b4b6ac5f1c3db290c9/work/docs"
2026-04-04 01:35:40.865245 |
2026-04-04 01:35:40.865486 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-04 01:35:41.815075 | orchestrator | changed: .d..t...... ./
2026-04-04 01:35:41.815723 | orchestrator | changed: All items complete
2026-04-04 01:35:41.815937 |
2026-04-04 01:35:42.571080 | orchestrator | changed: .d..t...... ./
2026-04-04 01:35:43.349287 | orchestrator | changed: .d..t...... ./
2026-04-04 01:35:43.375019 |
2026-04-04 01:35:43.375243 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-04 01:35:43.414610 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:43.417415 | orchestrator | skipping: Conditional result was False
2026-04-04 01:35:43.437248 |
2026-04-04 01:35:43.437365 | PLAY RECAP
2026-04-04 01:35:43.437465 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-04 01:35:43.437510 |
2026-04-04 01:35:43.573019 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-04 01:35:43.574093 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-04 01:35:44.310335 |
2026-04-04 01:35:44.310553 | PLAY [Base post]
2026-04-04 01:35:44.325203 |
2026-04-04 01:35:44.325345 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-04 01:35:45.362168 | orchestrator | changed
2026-04-04 01:35:45.370214 |
2026-04-04 01:35:45.370328 | PLAY RECAP
2026-04-04 01:35:45.370392 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-04 01:35:45.370475 |
2026-04-04 01:35:45.491921 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-04 01:35:45.494281 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-04 01:35:46.279384 |
2026-04-04 01:35:46.279574 | PLAY [Base post-logs]
2026-04-04 01:35:46.290390 |
2026-04-04 01:35:46.290539 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-04 01:35:46.762859 | localhost | changed
2026-04-04 01:35:46.773301 |
2026-04-04 01:35:46.773519 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-04 01:35:46.799541 | localhost | ok
2026-04-04 01:35:46.802645 |
2026-04-04 01:35:46.802748 | TASK [Set zuul-log-path fact]
2026-04-04 01:35:46.817868 | localhost | ok
2026-04-04 01:35:46.825780 |
2026-04-04 01:35:46.825893 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-04 01:35:46.850549 | localhost | ok
2026-04-04 01:35:46.853558 |
2026-04-04 01:35:46.853660 | TASK [upload-logs : Create log directories]
2026-04-04 01:35:47.347064 | localhost | changed
2026-04-04 01:35:47.349988 |
2026-04-04 01:35:47.350094 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-04 01:35:47.884104 | localhost -> localhost | ok: Runtime: 0:00:00.006986
2026-04-04 01:35:47.893592 |
2026-04-04 01:35:47.893837 | TASK [upload-logs : Upload logs to log server]
2026-04-04 01:35:48.513188 | localhost | Output suppressed because no_log was given
2026-04-04 01:35:48.518003 |
2026-04-04 01:35:48.518263 | LOOP [upload-logs : Compress console log and json output]
2026-04-04 01:35:48.579240 | localhost | skipping: Conditional result was False
2026-04-04 01:35:48.584137 | localhost | skipping: Conditional result was False
2026-04-04 01:35:48.592185 |
2026-04-04 01:35:48.592506 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-04 01:35:48.642588 | localhost | skipping: Conditional result was False
2026-04-04 01:35:48.643396 |
2026-04-04 01:35:48.645451 | localhost | skipping: Conditional result was False
2026-04-04 01:35:48.652924 |
2026-04-04 01:35:48.653138 | LOOP [upload-logs : Upload console log and json output]
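Editor's note on the tempest failures logged above: both `_tempest` invocations fail with `'run --workspace-path … --concurrency 16' is not a tempest command`, i.e. the CLI received the entire argument list as one word. That error signature typically comes from a wrapper collapsing its arguments (e.g. via `"$*"` or a quoted variable) before handing them to the real command. A minimal sketch of that quoting pitfall, using a hypothetical wrapper function (not the actual image entrypoint, which is not shown in this log):

```shell
#!/usr/bin/env sh
# Hypothetical wrappers illustrating the word-splitting difference.
# Broken: "$*" joins every argument into a single word, so the wrapped
# command sees 'run --regex ...' as one (unknown) subcommand name.
run_broken() { set -- "run --regex tempest.api.object_storage"; echo "argv1=[$1]"; }
# Fixed: passing the arguments unquoted/as "$@" keeps each word separate.
run_fixed()  { set -- run --regex tempest.api.object_storage; echo "argv1=[$1]"; }

run_broken   # argv1=[run --regex tempest.api.object_storage]
run_fixed    # argv1=[run]
```

The broken variant reproduces the shape of the logged error: the subcommand parser is asked to look up the whole string instead of just `run`.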